Test Report: QEMU_macOS 18644

382efc9ec0890000466ab6258d7a89af3764444c:2024-04-15:34035

Test fail (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.49
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.04
36 TestAddons/Setup 9.92
37 TestCertOptions 12.1
38 TestCertExpiration 197.25
39 TestDockerFlags 12.18
40 TestForceSystemdFlag 12.13
41 TestForceSystemdEnv 10.25
47 TestErrorSpam/setup 9.77
56 TestFunctional/serial/StartWithProxy 9.9
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.69
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.94
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.14
91 TestFunctional/parallel/CpCmd 0.3
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.3
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.05
120 TestFunctional/parallel/ServiceCmd/URL 0.05
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 119.8
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.3
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.55
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 26.48
150 TestMultiControlPlane/serial/StartCluster 9.85
151 TestMultiControlPlane/serial/DeployApp 113.98
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.11
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
159 TestMultiControlPlane/serial/RestartSecondaryNode 54.28
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.5
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
164 TestMultiControlPlane/serial/StopCluster 3.85
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
171 TestImageBuild/serial/Setup 10
174 TestJSONOutput/start/Command 9.91
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.29
206 TestMountStart/serial/StartWithMountFirst 10.01
209 TestMultiNode/serial/FreshStart2Nodes 9.88
210 TestMultiNode/serial/DeployApp2Nodes 90.18
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.11
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 56.54
218 TestMultiNode/serial/RestartKeepsNodes 9.02
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.36
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.12
226 TestPreload 9.97
228 TestScheduledStopUnix 10.15
229 TestSkaffold 12.32
232 TestRunningBinaryUpgrade 595.66
234 TestKubernetesUpgrade 17.18
238 TestNoKubernetes/serial/StartWithK8s 12.42
250 TestNoKubernetes/serial/StartWithStopK8s 5.37
251 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.51
252 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.21
253 TestNoKubernetes/serial/Start 5.32
257 TestNoKubernetes/serial/StartNoArgs 7.59
260 TestStoppedBinaryUpgrade/Upgrade 563.99
269 TestPause/serial/Start 9.84
270 TestNetworkPlugins/group/auto/Start 10.03
271 TestNetworkPlugins/group/kindnet/Start 9.73
272 TestNetworkPlugins/group/flannel/Start 9.76
273 TestNetworkPlugins/group/enable-default-cni/Start 9.8
274 TestNetworkPlugins/group/bridge/Start 9.83
275 TestNetworkPlugins/group/kubenet/Start 9.75
276 TestNetworkPlugins/group/custom-flannel/Start 9.87
277 TestNetworkPlugins/group/calico/Start 9.73
278 TestNetworkPlugins/group/false/Start 9.74
280 TestStartStop/group/old-k8s-version/serial/FirstStart 9.9
281 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
282 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
285 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
286 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
287 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
288 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
289 TestStartStop/group/old-k8s-version/serial/Pause 0.1
291 TestStartStop/group/no-preload/serial/FirstStart 9.98
292 TestStartStop/group/no-preload/serial/DeployApp 0.09
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/no-preload/serial/SecondStart 5.26
297 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/no-preload/serial/Pause 0.11
302 TestStartStop/group/embed-certs/serial/FirstStart 9.86
303 TestStartStop/group/embed-certs/serial/DeployApp 0.09
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
308 TestStartStop/group/embed-certs/serial/SecondStart 5.26
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/embed-certs/serial/Pause 0.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.83
315 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
319 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.24
320 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
323 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
325 TestStartStop/group/newest-cni/serial/FirstStart 9.83
330 TestStartStop/group/newest-cni/serial/SecondStart 5.29
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.11
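
Nearly every qemu2 start failure in this table shares the socket_vmnet signature shown in the logs below (ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused). For local triage, a single failing test can be re-run in isolation with Go's standard test runner. This is only a sketch: minikube's integration suite takes additional flags (for example, the path to the built binary), so check test/integration and the Makefile for the exact invocation.

    # Hypothetical single-test repro from a minikube checkout; -v, -run and -timeout
    # are standard "go test" flags. Suite-specific flags are omitted here.
    go test ./test/integration -v -run 'TestOffline$' -timeout 30m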
TestDownloadOnly/v1.20.0/json-events (12.49s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-387000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-387000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.484506458s)

-- stdout --
	{"specversion":"1.0","id":"e0ae04fe-6a77-4b7a-ab25-e77588b44da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-387000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1a53d73-fec4-413e-9fef-f930912ef5c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18644"}}
	{"specversion":"1.0","id":"30ee5966-3714-42f1-8833-0cc48cc8052a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig"}}
	{"specversion":"1.0","id":"592c321f-e0f3-4db9-8883-9f8a782c41ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c68ba236-4adc-48b6-9aa9-4d3785934bd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c521e688-3675-4108-ae77-5387cd88272f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube"}}
	{"specversion":"1.0","id":"36964109-6d4a-4f16-945d-3161bbecdc94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"cf423b3f-b7d9-4935-a4fc-5687e3202496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"98b6ab88-56dc-4683-8e8e-43264b7b5fb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"395652ab-4d5b-443a-b37d-05c2f7053ab2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbccbf56-967a-4723-ad88-5b9fb692a49d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-387000\" primary control-plane node in \"download-only-387000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"05704f7a-93b2-4ba4-b60b-2b21b6741d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"eebdf223-00da-4e70-894e-6f4ed426f8ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109127260 0x109127260 0x109127260 0x109127260 0x109127260 0x109127260 0x109127260] Decompressors:map[bz2:0x1400058e348 gz:0x1400058e3d0 tar:0x1400058e380 tar.bz2:0x1400058e390 tar.gz:0x1400058e3a0 tar.xz:0x1400058e3b0 tar.zst:0x1400058e3c0 tbz2:0x1400058e390 tgz:0x1
400058e3a0 txz:0x1400058e3b0 tzst:0x1400058e3c0 xz:0x1400058e3d8 zip:0x1400058e3e0 zst:0x1400058e3f0] Getters:map[file:0x14002504640 http:0x140004ba1e0 https:0x140004ba230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"826b00f9-3121-4ee0-a339-89d670699b3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0415 04:53:17.183483   12717 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:53:17.183639   12717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:53:17.183642   12717 out.go:304] Setting ErrFile to fd 2...
	I0415 04:53:17.183644   12717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:53:17.183767   12717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	W0415 04:53:17.183877   12717 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18644-12307/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18644-12307/.minikube/config/config.json: no such file or directory
	I0415 04:53:17.185111   12717 out.go:298] Setting JSON to true
	I0415 04:53:17.201430   12717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6768,"bootTime":1713175229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:53:17.201523   12717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:53:17.208138   12717 out.go:97] [download-only-387000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:53:17.209810   12717 out.go:169] MINIKUBE_LOCATION=18644
	W0415 04:53:17.208289   12717 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 04:53:17.208298   12717 notify.go:220] Checking for updates...
	I0415 04:53:17.219069   12717 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:53:17.223043   12717 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:53:17.226059   12717 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:53:17.229028   12717 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	W0415 04:53:17.235033   12717 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 04:53:17.235257   12717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:53:17.237993   12717 out.go:97] Using the qemu2 driver based on user configuration
	I0415 04:53:17.238012   12717 start.go:297] selected driver: qemu2
	I0415 04:53:17.238029   12717 start.go:901] validating driver "qemu2" against <nil>
	I0415 04:53:17.238116   12717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:53:17.241024   12717 out.go:169] Automatically selected the socket_vmnet network
	I0415 04:53:17.246364   12717 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0415 04:53:17.246464   12717 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 04:53:17.246548   12717 cni.go:84] Creating CNI manager for ""
	I0415 04:53:17.246556   12717 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 04:53:17.246606   12717 start.go:340] cluster config:
	{Name:download-only-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:53:17.251664   12717 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 04:53:17.255082   12717 out.go:97] Downloading VM boot image ...
	I0415 04:53:17.255102   12717 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso
	I0415 04:53:22.479828   12717 out.go:97] Starting "download-only-387000" primary control-plane node in "download-only-387000" cluster
	I0415 04:53:22.479853   12717 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:53:22.533703   12717 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 04:53:22.533716   12717 cache.go:56] Caching tarball of preloaded images
	I0415 04:53:22.534201   12717 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:53:22.538964   12717 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 04:53:22.538979   12717 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:22.611304   12717 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 04:53:28.205077   12717 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:28.205238   12717 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:28.900854   12717 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 04:53:28.901059   12717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/download-only-387000/config.json ...
	I0415 04:53:28.901085   12717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/download-only-387000/config.json: {Name:mk6b93fa5cb41271a937c1d2300d331d246071d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:53:28.901310   12717 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:53:28.901491   12717 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0415 04:53:29.586820   12717 out.go:169] 
	W0415 04:53:29.592947   12717 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109127260 0x109127260 0x109127260 0x109127260 0x109127260 0x109127260 0x109127260] Decompressors:map[bz2:0x1400058e348 gz:0x1400058e3d0 tar:0x1400058e380 tar.bz2:0x1400058e390 tar.gz:0x1400058e3a0 tar.xz:0x1400058e3b0 tar.zst:0x1400058e3c0 tbz2:0x1400058e390 tgz:0x1400058e3a0 txz:0x1400058e3b0 tzst:0x1400058e3c0 xz:0x1400058e3d8 zip:0x1400058e3e0 zst:0x1400058e3f0] Getters:map[file:0x14002504640 http:0x140004ba1e0 https:0x140004ba230] Dir:false ProgressLis
tener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0415 04:53:29.592972   12717 out_reason.go:110] 
	W0415 04:53:29.599817   12717 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 04:53:29.603851   12717 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-387000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (12.49s)
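
The exit status 40 comes from the kubectl cache step: the checksum URL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 answers with HTTP 404. A plausible reading, not verified here, is that darwin/arm64 kubectl artifacts were never published for v1.20.0. A quick manual check from any machine with curl:

    # Prints the final HTTP status after redirects; 404 matches the
    # "bad response code: 404" in the getter error above.
    curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256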

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
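
This subtest downloads nothing itself; it only stats the binary the json-events test should have cached, so it fails as a direct knock-on of the 404 above. The equivalent manual check on the agent:

    # Expected to fail with "No such file or directory", matching the stat error above.
    stat /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.20.0/kubectl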

TestOffline (10.04s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-657000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-657000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.847453708s)

-- stdout --
	* [offline-docker-657000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-657000" primary control-plane node in "offline-docker-657000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-657000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:05:45.193536   14328 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:05:45.193659   14328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:05:45.193663   14328 out.go:304] Setting ErrFile to fd 2...
	I0415 05:05:45.193665   14328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:05:45.193791   14328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:05:45.195114   14328 out.go:298] Setting JSON to false
	I0415 05:05:45.215927   14328 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7516,"bootTime":1713175229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:05:45.215999   14328 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:05:45.218259   14328 out.go:177] * [offline-docker-657000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:05:45.238495   14328 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:05:45.234542   14328 notify.go:220] Checking for updates...
	I0415 05:05:45.247490   14328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:05:45.257475   14328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:05:45.260373   14328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:05:45.263466   14328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:05:45.266434   14328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:05:45.268413   14328 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:05:45.268467   14328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:05:45.274461   14328 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:05:45.281463   14328 start.go:297] selected driver: qemu2
	I0415 05:05:45.281470   14328 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:05:45.281477   14328 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:05:45.283603   14328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:05:45.290407   14328 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:05:45.294516   14328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:05:45.294563   14328 cni.go:84] Creating CNI manager for ""
	I0415 05:05:45.294570   14328 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:05:45.294575   14328 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:05:45.294615   14328 start.go:340] cluster config:
	{Name:offline-docker-657000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:05:45.299419   14328 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:45.309241   14328 out.go:177] * Starting "offline-docker-657000" primary control-plane node in "offline-docker-657000" cluster
	I0415 05:05:45.313457   14328 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:05:45.313486   14328 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:05:45.313495   14328 cache.go:56] Caching tarball of preloaded images
	I0415 05:05:45.313582   14328 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:05:45.313590   14328 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:05:45.313665   14328 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/offline-docker-657000/config.json ...
	I0415 05:05:45.313679   14328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/offline-docker-657000/config.json: {Name:mk4a2a159c274f5e00d4f9df944c586ed4ee1356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:05:45.313918   14328 start.go:360] acquireMachinesLock for offline-docker-657000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:05:45.313962   14328 start.go:364] duration metric: took 31.959µs to acquireMachinesLock for "offline-docker-657000"
	I0415 05:05:45.313975   14328 start.go:93] Provisioning new machine with config: &{Name:offline-docker-657000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:05:45.314020   14328 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:05:45.317347   14328 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0415 05:05:45.334376   14328 start.go:159] libmachine.API.Create for "offline-docker-657000" (driver="qemu2")
	I0415 05:05:45.334403   14328 client.go:168] LocalClient.Create starting
	I0415 05:05:45.334487   14328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:05:45.334523   14328 main.go:141] libmachine: Decoding PEM data...
	I0415 05:05:45.334536   14328 main.go:141] libmachine: Parsing certificate...
	I0415 05:05:45.334577   14328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:05:45.334599   14328 main.go:141] libmachine: Decoding PEM data...
	I0415 05:05:45.334606   14328 main.go:141] libmachine: Parsing certificate...
	I0415 05:05:45.335027   14328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:05:45.550777   14328 main.go:141] libmachine: Creating SSH key...
	I0415 05:05:45.594187   14328 main.go:141] libmachine: Creating Disk image...
	I0415 05:05:45.594195   14328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:05:45.594369   14328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2
	I0415 05:05:45.607056   14328 main.go:141] libmachine: STDOUT: 
	I0415 05:05:45.607075   14328 main.go:141] libmachine: STDERR: 
	I0415 05:05:45.607127   14328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2 +20000M
	I0415 05:05:45.617893   14328 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:05:45.617922   14328 main.go:141] libmachine: STDERR: 
	I0415 05:05:45.617937   14328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2
	I0415 05:05:45.617942   14328 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:05:45.617972   14328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:28:6a:2a:ea:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2
	I0415 05:05:45.619547   14328 main.go:141] libmachine: STDOUT: 
	I0415 05:05:45.619564   14328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:05:45.619584   14328 client.go:171] duration metric: took 285.187959ms to LocalClient.Create
	I0415 05:05:47.621674   14328 start.go:128] duration metric: took 2.307727833s to createHost
	I0415 05:05:47.621723   14328 start.go:83] releasing machines lock for "offline-docker-657000", held for 2.307842416s
	W0415 05:05:47.621788   14328 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:05:47.644837   14328 out.go:177] * Deleting "offline-docker-657000" in qemu2 ...
	W0415 05:05:47.662083   14328 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:05:47.662104   14328 start.go:728] Will try again in 5 seconds ...
	I0415 05:05:52.664264   14328 start.go:360] acquireMachinesLock for offline-docker-657000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:05:52.664793   14328 start.go:364] duration metric: took 371.542µs to acquireMachinesLock for "offline-docker-657000"
	I0415 05:05:52.664958   14328 start.go:93] Provisioning new machine with config: &{Name:offline-docker-657000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:05:52.665200   14328 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:05:52.673749   14328 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0415 05:05:52.724436   14328 start.go:159] libmachine.API.Create for "offline-docker-657000" (driver="qemu2")
	I0415 05:05:52.724481   14328 client.go:168] LocalClient.Create starting
	I0415 05:05:52.724578   14328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:05:52.724636   14328 main.go:141] libmachine: Decoding PEM data...
	I0415 05:05:52.724655   14328 main.go:141] libmachine: Parsing certificate...
	I0415 05:05:52.724745   14328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:05:52.724788   14328 main.go:141] libmachine: Decoding PEM data...
	I0415 05:05:52.724800   14328 main.go:141] libmachine: Parsing certificate...
	I0415 05:05:52.725284   14328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:05:52.857394   14328 main.go:141] libmachine: Creating SSH key...
	I0415 05:05:52.937569   14328 main.go:141] libmachine: Creating Disk image...
	I0415 05:05:52.937575   14328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:05:52.937751   14328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2
	I0415 05:05:52.950345   14328 main.go:141] libmachine: STDOUT: 
	I0415 05:05:52.950368   14328 main.go:141] libmachine: STDERR: 
	I0415 05:05:52.950415   14328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2 +20000M
	I0415 05:05:52.961144   14328 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:05:52.961163   14328 main.go:141] libmachine: STDERR: 
	I0415 05:05:52.961173   14328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2
	I0415 05:05:52.961179   14328 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:05:52.961206   14328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:94:bc:cd:e2:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/offline-docker-657000/disk.qcow2
	I0415 05:05:52.962848   14328 main.go:141] libmachine: STDOUT: 
	I0415 05:05:52.962866   14328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:05:52.962878   14328 client.go:171] duration metric: took 238.402875ms to LocalClient.Create
	I0415 05:05:54.964974   14328 start.go:128] duration metric: took 2.299817s to createHost
	I0415 05:05:54.965033   14328 start.go:83] releasing machines lock for "offline-docker-657000", held for 2.300295667s
	W0415 05:05:54.965401   14328 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-657000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-657000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:05:54.971991   14328 out.go:177] 
	W0415 05:05:54.977789   14328 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:05:54.977811   14328 out.go:239] * 
	* 
	W0415 05:05:54.980550   14328 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:05:54.989944   14328 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-657000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-04-15 05:05:55.008144 -0700 PDT m=+757.984901751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-657000 -n offline-docker-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-657000 -n offline-docker-657000: exit status 7 (68.434625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-657000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-657000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-657000
--- FAIL: TestOffline (10.04s)
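
TestOffline never reaches its own offline assertions: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so the QEMU VM is never launched. The same error repeats across the other qemu2 start failures in this run. A first check on the agent, assuming socket_vmnet is meant to run as a root launchd service per the minikube qemu2 driver docs (the grep pattern below is a guess; the actual service label may differ by install method):

    # Is the socket present on disk, and is any socket_vmnet job known to launchd?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i vmnet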

TestAddons/Setup (9.92s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-891000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-891000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (9.918113292s)

-- stdout --
	* [addons-891000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-891000" primary control-plane node in "addons-891000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 04:54:08.713302   12873 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:54:08.713434   12873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:54:08.713437   12873 out.go:304] Setting ErrFile to fd 2...
	I0415 04:54:08.713439   12873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:54:08.713571   12873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:54:08.714730   12873 out.go:298] Setting JSON to false
	I0415 04:54:08.730632   12873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6819,"bootTime":1713175229,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:54:08.730717   12873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:54:08.735462   12873 out.go:177] * [addons-891000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:54:08.742529   12873 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 04:54:08.746476   12873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:54:08.742576   12873 notify.go:220] Checking for updates...
	I0415 04:54:08.752479   12873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:54:08.755427   12873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:54:08.758428   12873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 04:54:08.761495   12873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 04:54:08.764640   12873 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:54:08.768423   12873 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 04:54:08.775423   12873 start.go:297] selected driver: qemu2
	I0415 04:54:08.775431   12873 start.go:901] validating driver "qemu2" against <nil>
	I0415 04:54:08.775437   12873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 04:54:08.777672   12873 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:54:08.780471   12873 out.go:177] * Automatically selected the socket_vmnet network
	I0415 04:54:08.783528   12873 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 04:54:08.783566   12873 cni.go:84] Creating CNI manager for ""
	I0415 04:54:08.783573   12873 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 04:54:08.783577   12873 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 04:54:08.783605   12873 start.go:340] cluster config:
	{Name:addons-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-891000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:54:08.787793   12873 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 04:54:08.795464   12873 out.go:177] * Starting "addons-891000" primary control-plane node in "addons-891000" cluster
	I0415 04:54:08.799478   12873 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:54:08.799490   12873 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 04:54:08.799496   12873 cache.go:56] Caching tarball of preloaded images
	I0415 04:54:08.799550   12873 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 04:54:08.799555   12873 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 04:54:08.799734   12873 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/addons-891000/config.json ...
	I0415 04:54:08.799746   12873 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/addons-891000/config.json: {Name:mk26a6c1dd31d3b090b54fc66b2540b552f80fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:54:08.800178   12873 start.go:360] acquireMachinesLock for addons-891000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 04:54:08.800238   12873 start.go:364] duration metric: took 54.041µs to acquireMachinesLock for "addons-891000"
	I0415 04:54:08.800249   12873 start.go:93] Provisioning new machine with config: &{Name:addons-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-891000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 04:54:08.800274   12873 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 04:54:08.808481   12873 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0415 04:54:08.825669   12873 start.go:159] libmachine.API.Create for "addons-891000" (driver="qemu2")
	I0415 04:54:08.825696   12873 client.go:168] LocalClient.Create starting
	I0415 04:54:08.825835   12873 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 04:54:08.871703   12873 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 04:54:08.961716   12873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 04:54:09.100034   12873 main.go:141] libmachine: Creating SSH key...
	I0415 04:54:09.177436   12873 main.go:141] libmachine: Creating Disk image...
	I0415 04:54:09.177441   12873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 04:54:09.177629   12873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2
	I0415 04:54:09.190102   12873 main.go:141] libmachine: STDOUT: 
	I0415 04:54:09.190125   12873 main.go:141] libmachine: STDERR: 
	I0415 04:54:09.190175   12873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2 +20000M
	I0415 04:54:09.201012   12873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 04:54:09.201030   12873 main.go:141] libmachine: STDERR: 
	I0415 04:54:09.201049   12873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2
	I0415 04:54:09.201054   12873 main.go:141] libmachine: Starting QEMU VM...
	I0415 04:54:09.201078   12873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:05:b2:12:41:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2
	I0415 04:54:09.202707   12873 main.go:141] libmachine: STDOUT: 
	I0415 04:54:09.202722   12873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 04:54:09.202741   12873 client.go:171] duration metric: took 377.049958ms to LocalClient.Create
	I0415 04:54:11.204852   12873 start.go:128] duration metric: took 2.404623375s to createHost
	I0415 04:54:11.204903   12873 start.go:83] releasing machines lock for "addons-891000", held for 2.404714167s
	W0415 04:54:11.204967   12873 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:54:11.215248   12873 out.go:177] * Deleting "addons-891000" in qemu2 ...
	W0415 04:54:11.237811   12873 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:54:11.237843   12873 start.go:728] Will try again in 5 seconds ...
	I0415 04:54:16.239946   12873 start.go:360] acquireMachinesLock for addons-891000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 04:54:16.240402   12873 start.go:364] duration metric: took 353.708µs to acquireMachinesLock for "addons-891000"
	I0415 04:54:16.240556   12873 start.go:93] Provisioning new machine with config: &{Name:addons-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-891000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 04:54:16.240804   12873 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 04:54:16.251409   12873 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0415 04:54:16.303497   12873 start.go:159] libmachine.API.Create for "addons-891000" (driver="qemu2")
	I0415 04:54:16.303548   12873 client.go:168] LocalClient.Create starting
	I0415 04:54:16.303681   12873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 04:54:16.303760   12873 main.go:141] libmachine: Decoding PEM data...
	I0415 04:54:16.303784   12873 main.go:141] libmachine: Parsing certificate...
	I0415 04:54:16.303882   12873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 04:54:16.303940   12873 main.go:141] libmachine: Decoding PEM data...
	I0415 04:54:16.303955   12873 main.go:141] libmachine: Parsing certificate...
	I0415 04:54:16.304506   12873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 04:54:16.437515   12873 main.go:141] libmachine: Creating SSH key...
	I0415 04:54:16.530730   12873 main.go:141] libmachine: Creating Disk image...
	I0415 04:54:16.530735   12873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 04:54:16.530910   12873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2
	I0415 04:54:16.543532   12873 main.go:141] libmachine: STDOUT: 
	I0415 04:54:16.543552   12873 main.go:141] libmachine: STDERR: 
	I0415 04:54:16.543621   12873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2 +20000M
	I0415 04:54:16.554648   12873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 04:54:16.554678   12873 main.go:141] libmachine: STDERR: 
	I0415 04:54:16.554689   12873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2
	I0415 04:54:16.554694   12873 main.go:141] libmachine: Starting QEMU VM...
	I0415 04:54:16.554730   12873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:ba:ee:47:91:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/addons-891000/disk.qcow2
	I0415 04:54:16.556438   12873 main.go:141] libmachine: STDOUT: 
	I0415 04:54:16.556466   12873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 04:54:16.556480   12873 client.go:171] duration metric: took 252.93125ms to LocalClient.Create
	I0415 04:54:18.558663   12873 start.go:128] duration metric: took 2.317868708s to createHost
	I0415 04:54:18.558711   12873 start.go:83] releasing machines lock for "addons-891000", held for 2.3183435s
	W0415 04:54:18.558989   12873 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:54:18.569629   12873 out.go:177] 
	W0415 04:54:18.573682   12873 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 04:54:18.573714   12873 out.go:239] * 
	* 
	W0415 04:54:18.576463   12873 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 04:54:18.585637   12873 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-891000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (9.92s)
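
Triage note: every start in this report dies the same way: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client and immediately reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`. "Connection refused" on a Unix socket means the path exists but nothing is accepting connections on it, which points at the socket_vmnet daemon on the CI host rather than at minikube itself. A minimal Go probe, assuming only the socket path shown in the logs above (it is not part of the test suite), can confirm this independently:

    // socketprobe.go - a minimal sketch, not part of the minikube test
    // suite. It dials the Unix socket that socket_vmnet should listen on
    // (path taken from SocketVMnetPath in the cluster config above) and
    // reports whether the connection is accepted or refused.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // A "connection refused" here reproduces the failure mode in
            // the logs: no daemon is accepting connections on the socket.
            fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("%s is accepting connections\n", sock)
    }

The delete-and-retry cycle in the log ("Will try again in 5 seconds ...") is minikube's own StartHost retry; it cannot succeed while the socket stays unreachable, so the qemu2 failures in the tests below are almost certainly the same root cause.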

TestCertOptions (12.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-218000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-218000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.811140334s)

-- stdout --
	* [cert-options-218000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-218000" primary control-plane node in "cert-options-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-218000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-218000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-218000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.640708ms)

-- stdout --
	* The control-plane node cert-options-218000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-218000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-218000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-218000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-218000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-218000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.131ms)

-- stdout --
	* The control-plane node cert-options-218000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-218000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-218000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-218000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-218000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-04-15 05:06:44.885857 -0700 PDT m=+807.864691001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-218000 -n cert-options-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-218000 -n cert-options-218000: exit status 7 (31.944959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-218000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-218000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-218000
--- FAIL: TestCertOptions (12.10s)
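
Triage note: the SAN assertions at cert_options_test.go:69 never ran against a real certificate; the ssh step that should read /var/lib/minikube/certs/apiserver.crt exited 83 because the host never started, so every expected SAN entry is reported missing. For reference, a check of the kind this test performs can be sketched with Go's crypto/x509 (hypothetical local file path; this is not the test's actual code):

    // sancheck.go - a minimal sketch of a SAN check like the one in
    // cert_options_test.go; not the test's actual code.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "net"
        "os"
    )

    func main() {
        // Hypothetical local copy of the apiserver certificate.
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM certificate found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // The run above passed --apiserver-ips=192.168.15.15 and
        // --apiserver-names=www.google.com; those values should
        // appear in the certificate's SANs.
        for _, ip := range cert.IPAddresses {
            if ip.Equal(net.ParseIP("192.168.15.15")) {
                fmt.Println("found 192.168.15.15 in IP SANs")
            }
        }
        for _, name := range cert.DNSNames {
            if name == "www.google.com" {
                fmt.Println("found www.google.com in DNS SANs")
            }
        }
    }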

TestCertExpiration (197.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (11.842301375s)

-- stdout --
	* [cert-expiration-169000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-169000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.232332583s)

-- stdout --
	* [cert-expiration-169000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-169000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-04-15 05:09:47.520133 -0700 PDT m=+990.506568084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-169000 -n cert-expiration-169000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-169000 -n cert-expiration-169000: exit status 7 (70.384666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-169000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-169000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-169000
--- FAIL: TestCertExpiration (197.25s)
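
Triage note: neither start reached a running cluster (both exited 80 on the same socket_vmnet refusal), and the 197s wall time is consistent with the test waiting out the 3-minute --cert-expiration window between the two starts (11.8s + ~180s + 5.2s, plus cleanup). The property under test, that a restart warns about expired certificates, reduces to reading the certificate's validity window; a sketch (hypothetical file path, not the test's code):

    // certexpiry.go - a minimal sketch of reading a certificate's
    // validity window, the property TestCertExpiration exercises;
    // not the test's actual code.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical local copy of a cluster certificate.
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM certificate found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("valid from %s until %s\n", cert.NotBefore, cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            // This is the state the test sets up with --cert-expiration=3m;
            // a subsequent "minikube start" is expected to warn about it.
            fmt.Println("certificate has expired")
        }
    }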

TestDockerFlags (12.18s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-299000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-299000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.883576417s)

-- stdout --
	* [docker-flags-299000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-299000" primary control-plane node in "docker-flags-299000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-299000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:06:20.771099   14604 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:06:20.771234   14604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:06:20.771239   14604 out.go:304] Setting ErrFile to fd 2...
	I0415 05:06:20.771242   14604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:06:20.771365   14604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:06:20.772437   14604 out.go:298] Setting JSON to false
	I0415 05:06:20.788858   14604 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7551,"bootTime":1713175229,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:06:20.788929   14604 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:06:20.793196   14604 out.go:177] * [docker-flags-299000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:06:20.800285   14604 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:06:20.800314   14604 notify.go:220] Checking for updates...
	I0415 05:06:20.808225   14604 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:06:20.812209   14604 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:06:20.815158   14604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:06:20.818243   14604 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:06:20.826225   14604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:06:20.830503   14604 config.go:182] Loaded profile config "force-systemd-flag-287000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:06:20.830567   14604 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:06:20.830620   14604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:06:20.833330   14604 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:06:20.837143   14604 start.go:297] selected driver: qemu2
	I0415 05:06:20.837149   14604 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:06:20.837154   14604 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:06:20.839181   14604 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:06:20.842170   14604 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:06:20.846279   14604 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0415 05:06:20.846324   14604 cni.go:84] Creating CNI manager for ""
	I0415 05:06:20.846337   14604 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:06:20.846340   14604 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:06:20.846381   14604 start.go:340] cluster config:
	{Name:docker-flags-299000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-299000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:06:20.850652   14604 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:06:20.855247   14604 out.go:177] * Starting "docker-flags-299000" primary control-plane node in "docker-flags-299000" cluster
	I0415 05:06:20.863242   14604 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:06:20.863260   14604 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:06:20.863267   14604 cache.go:56] Caching tarball of preloaded images
	I0415 05:06:20.863338   14604 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:06:20.863343   14604 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:06:20.863403   14604 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/docker-flags-299000/config.json ...
	I0415 05:06:20.863417   14604 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/docker-flags-299000/config.json: {Name:mk7e2c5403bda6a685ce36156b10732761f5dbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:06:20.863818   14604 start.go:360] acquireMachinesLock for docker-flags-299000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:06:22.850879   14604 start.go:364] duration metric: took 1.987088125s to acquireMachinesLock for "docker-flags-299000"
	I0415 05:06:22.851025   14604 start.go:93] Provisioning new machine with config: &{Name:docker-flags-299000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-299000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:06:22.851231   14604 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:06:22.859915   14604 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0415 05:06:22.907599   14604 start.go:159] libmachine.API.Create for "docker-flags-299000" (driver="qemu2")
	I0415 05:06:22.907653   14604 client.go:168] LocalClient.Create starting
	I0415 05:06:22.907792   14604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:06:22.907858   14604 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:22.907873   14604 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:22.907948   14604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:06:22.907995   14604 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:22.908013   14604 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:22.908681   14604 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:06:23.053795   14604 main.go:141] libmachine: Creating SSH key...
	I0415 05:06:23.134205   14604 main.go:141] libmachine: Creating Disk image...
	I0415 05:06:23.134210   14604 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:06:23.134406   14604 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2
	I0415 05:06:23.146829   14604 main.go:141] libmachine: STDOUT: 
	I0415 05:06:23.146853   14604 main.go:141] libmachine: STDERR: 
	I0415 05:06:23.146908   14604 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2 +20000M
	I0415 05:06:23.157964   14604 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:06:23.157979   14604 main.go:141] libmachine: STDERR: 
	I0415 05:06:23.158002   14604 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2
	I0415 05:06:23.158006   14604 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:06:23.158034   14604 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:0b:13:de:79:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2
	I0415 05:06:23.159761   14604 main.go:141] libmachine: STDOUT: 
	I0415 05:06:23.159783   14604 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:06:23.159803   14604 client.go:171] duration metric: took 252.151ms to LocalClient.Create
	I0415 05:06:25.162001   14604 start.go:128] duration metric: took 2.310822083s to createHost
	I0415 05:06:25.162100   14604 start.go:83] releasing machines lock for "docker-flags-299000", held for 2.311266625s
	W0415 05:06:25.162158   14604 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:25.175503   14604 out.go:177] * Deleting "docker-flags-299000" in qemu2 ...
	W0415 05:06:25.199950   14604 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:25.199972   14604 start.go:728] Will try again in 5 seconds ...
	I0415 05:06:30.200084   14604 start.go:360] acquireMachinesLock for docker-flags-299000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:06:30.200429   14604 start.go:364] duration metric: took 209.875µs to acquireMachinesLock for "docker-flags-299000"
	I0415 05:06:30.200537   14604 start.go:93] Provisioning new machine with config: &{Name:docker-flags-299000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-299000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:06:30.200821   14604 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:06:30.208490   14604 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0415 05:06:30.244652   14604 start.go:159] libmachine.API.Create for "docker-flags-299000" (driver="qemu2")
	I0415 05:06:30.244692   14604 client.go:168] LocalClient.Create starting
	I0415 05:06:30.244791   14604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:06:30.244841   14604 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:30.244852   14604 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:30.244909   14604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:06:30.244933   14604 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:30.244944   14604 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:30.245389   14604 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:06:30.443650   14604 main.go:141] libmachine: Creating SSH key...
	I0415 05:06:30.553891   14604 main.go:141] libmachine: Creating Disk image...
	I0415 05:06:30.553899   14604 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:06:30.554067   14604 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2
	I0415 05:06:30.570185   14604 main.go:141] libmachine: STDOUT: 
	I0415 05:06:30.570215   14604 main.go:141] libmachine: STDERR: 
	I0415 05:06:30.570271   14604 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2 +20000M
	I0415 05:06:30.581372   14604 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:06:30.581389   14604 main.go:141] libmachine: STDERR: 
	I0415 05:06:30.581400   14604 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2
	I0415 05:06:30.581404   14604 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:06:30.581435   14604 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:32:34:03:4d:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/docker-flags-299000/disk.qcow2
	I0415 05:06:30.583065   14604 main.go:141] libmachine: STDOUT: 
	I0415 05:06:30.583080   14604 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:06:30.583093   14604 client.go:171] duration metric: took 338.411166ms to LocalClient.Create
	I0415 05:06:32.585185   14604 start.go:128] duration metric: took 2.384434917s to createHost
	I0415 05:06:32.585282   14604 start.go:83] releasing machines lock for "docker-flags-299000", held for 2.384934584s
	W0415 05:06:32.585628   14604 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-299000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:32.598208   14604 out.go:177] 
	W0415 05:06:32.601255   14604 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:06:32.601292   14604 out.go:239] * 
	W0415 05:06:32.603883   14604 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:06:32.611171   14604 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-299000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-299000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-299000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.831167ms)

-- stdout --
	* The control-plane node docker-flags-299000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-299000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-299000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-299000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-299000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-299000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-299000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-299000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-299000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (70.863583ms)

-- stdout --
	* The control-plane node docker-flags-299000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-299000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-299000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-299000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-299000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-299000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-04-15 05:06:32.773591 -0700 PDT m=+795.751920167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-299000 -n docker-flags-299000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-299000 -n docker-flags-299000: exit status 7 (35.9815ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-299000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-299000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-299000
--- FAIL: TestDockerFlags (12.18s)
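Every failure in this group reduces to the same line in the captured STDERR: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (the SocketVMnetPath value in the config dump above), so the QEMU VM is never started. Below is a minimal standalone Go sketch of that connectivity check; it is a hypothetical diagnostic, not part of minikube or the test suite, and the socket path is taken from the log.

    // checksock.go - hypothetical diagnostic, not minikube code: dials the
    // unix socket that socket_vmnet_client needs and reports whether the
    // socket_vmnet daemon is accepting connections.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the log above

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // A "connection refused" here mirrors the STDERR captured above.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }

If this check fails on the CI host, every qemu2 test that selects the socket_vmnet network will fail the same way, which matches the cluster of GUEST_PROVISION errors in this report.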

TestForceSystemdFlag (12.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-287000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-287000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.871265583s)

-- stdout --
	* [force-systemd-flag-287000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-287000" primary control-plane node in "force-systemd-flag-287000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-287000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:06:18.352127   14586 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:06:18.352261   14586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:06:18.352264   14586 out.go:304] Setting ErrFile to fd 2...
	I0415 05:06:18.352266   14586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:06:18.352391   14586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:06:18.353474   14586 out.go:298] Setting JSON to false
	I0415 05:06:18.369878   14586 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7549,"bootTime":1713175229,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:06:18.369964   14586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:06:18.375103   14586 out.go:177] * [force-systemd-flag-287000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:06:18.381270   14586 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:06:18.381299   14586 notify.go:220] Checking for updates...
	I0415 05:06:18.388247   14586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:06:18.391239   14586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:06:18.392416   14586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:06:18.395192   14586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:06:18.402229   14586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:06:18.406511   14586 config.go:182] Loaded profile config "force-systemd-env-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:06:18.406575   14586 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:06:18.406623   14586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:06:18.411229   14586 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:06:18.416230   14586 start.go:297] selected driver: qemu2
	I0415 05:06:18.416236   14586 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:06:18.416241   14586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:06:18.418351   14586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:06:18.421240   14586 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:06:18.424307   14586 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 05:06:18.424323   14586 cni.go:84] Creating CNI manager for ""
	I0415 05:06:18.424328   14586 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:06:18.424334   14586 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:06:18.424355   14586 start.go:340] cluster config:
	{Name:force-systemd-flag-287000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-287000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:06:18.428426   14586 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:06:18.435238   14586 out.go:177] * Starting "force-systemd-flag-287000" primary control-plane node in "force-systemd-flag-287000" cluster
	I0415 05:06:18.439214   14586 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:06:18.439226   14586 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:06:18.439232   14586 cache.go:56] Caching tarball of preloaded images
	I0415 05:06:18.439282   14586 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:06:18.439286   14586 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:06:18.439349   14586 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/force-systemd-flag-287000/config.json ...
	I0415 05:06:18.439360   14586 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/force-systemd-flag-287000/config.json: {Name:mk51e55719e2297a936c7c79e27e7aff5a84b93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:06:18.439624   14586 start.go:360] acquireMachinesLock for force-systemd-flag-287000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:06:20.481398   14586 start.go:364] duration metric: took 2.041833958s to acquireMachinesLock for "force-systemd-flag-287000"
	I0415 05:06:20.481619   14586 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-287000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-287000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:06:20.481805   14586 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:06:20.489295   14586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0415 05:06:20.534649   14586 start.go:159] libmachine.API.Create for "force-systemd-flag-287000" (driver="qemu2")
	I0415 05:06:20.534703   14586 client.go:168] LocalClient.Create starting
	I0415 05:06:20.534853   14586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:06:20.534918   14586 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:20.534932   14586 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:20.535002   14586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:06:20.535049   14586 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:20.535064   14586 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:20.535695   14586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:06:20.692117   14586 main.go:141] libmachine: Creating SSH key...
	I0415 05:06:20.804842   14586 main.go:141] libmachine: Creating Disk image...
	I0415 05:06:20.804851   14586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:06:20.805056   14586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2
	I0415 05:06:20.830989   14586 main.go:141] libmachine: STDOUT: 
	I0415 05:06:20.831013   14586 main.go:141] libmachine: STDERR: 
	I0415 05:06:20.831058   14586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2 +20000M
	I0415 05:06:20.846716   14586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:06:20.846739   14586 main.go:141] libmachine: STDERR: 
	I0415 05:06:20.846762   14586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2
	I0415 05:06:20.846769   14586 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:06:20.846814   14586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:66:0a:79:48:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2
	I0415 05:06:20.848535   14586 main.go:141] libmachine: STDOUT: 
	I0415 05:06:20.848554   14586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:06:20.848573   14586 client.go:171] duration metric: took 313.877041ms to LocalClient.Create
	I0415 05:06:22.850667   14586 start.go:128] duration metric: took 2.36892775s to createHost
	I0415 05:06:22.850731   14586 start.go:83] releasing machines lock for "force-systemd-flag-287000", held for 2.369363958s
	W0415 05:06:22.850833   14586 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:22.867852   14586 out.go:177] * Deleting "force-systemd-flag-287000" in qemu2 ...
	W0415 05:06:22.883955   14586 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:22.883977   14586 start.go:728] Will try again in 5 seconds ...
	I0415 05:06:27.886033   14586 start.go:360] acquireMachinesLock for force-systemd-flag-287000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:06:27.886462   14586 start.go:364] duration metric: took 312.208µs to acquireMachinesLock for "force-systemd-flag-287000"
	I0415 05:06:27.886599   14586 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-287000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-287000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:06:27.886897   14586 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:06:27.892603   14586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0415 05:06:27.941498   14586 start.go:159] libmachine.API.Create for "force-systemd-flag-287000" (driver="qemu2")
	I0415 05:06:27.941540   14586 client.go:168] LocalClient.Create starting
	I0415 05:06:27.941645   14586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:06:27.941730   14586 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:27.941745   14586 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:27.941805   14586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:06:27.941848   14586 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:27.941863   14586 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:27.942564   14586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:06:28.074809   14586 main.go:141] libmachine: Creating SSH key...
	I0415 05:06:28.124797   14586 main.go:141] libmachine: Creating Disk image...
	I0415 05:06:28.124802   14586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:06:28.124974   14586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2
	I0415 05:06:28.137621   14586 main.go:141] libmachine: STDOUT: 
	I0415 05:06:28.137636   14586 main.go:141] libmachine: STDERR: 
	I0415 05:06:28.137694   14586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2 +20000M
	I0415 05:06:28.148516   14586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:06:28.148629   14586 main.go:141] libmachine: STDERR: 
	I0415 05:06:28.148644   14586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2
	I0415 05:06:28.148650   14586 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:06:28.148684   14586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:96:37:73:94:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-flag-287000/disk.qcow2
	I0415 05:06:28.150372   14586 main.go:141] libmachine: STDOUT: 
	I0415 05:06:28.150424   14586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:06:28.150443   14586 client.go:171] duration metric: took 208.906625ms to LocalClient.Create
	I0415 05:06:30.152566   14586 start.go:128] duration metric: took 2.265696666s to createHost
	I0415 05:06:30.152615   14586 start.go:83] releasing machines lock for "force-systemd-flag-287000", held for 2.266218375s
	W0415 05:06:30.152952   14586 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:30.162505   14586 out.go:177] 
	W0415 05:06:30.166579   14586 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:06:30.166619   14586 out.go:239] * 
	W0415 05:06:30.169003   14586 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:06:30.177516   14586 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-287000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-287000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-287000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (86.502625ms)

-- stdout --
	* The control-plane node force-systemd-flag-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-287000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-287000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-15 05:06:30.282786 -0700 PDT m=+793.261012042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-287000 -n force-systemd-flag-287000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-287000 -n force-systemd-flag-287000: exit status 7 (39.174375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-287000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-287000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-287000
--- FAIL: TestForceSystemdFlag (12.13s)
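Note that in every attempt above the driver gets through ISO download, SSH-key creation, and both qemu-img steps with empty STDERR; only the final socket_vmnet connection fails. The following sketch reproduces those two image steps, with placeholder paths (the real ones are the per-profile paths in the log); the flags match the "executing: qemu-img ..." lines captured above.

    // mkdisk.go - illustrative sketch of the two qemu-img calls recorded in
    // the log above; paths are placeholders, flags match the captured commands.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        raw := "disk.qcow2.raw" // placeholder for the per-profile raw image
        qcow2 := "disk.qcow2"   // placeholder for the converted image

        // Step 1: convert the raw seed image to qcow2.
        if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
            log.Fatalf("qemu-img convert: %v\n%s", err, out)
        }

        // Step 2: grow the image by 20000 MB, matching DiskSize:20000 in the config.
        if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
            log.Fatalf("qemu-img resize: %v\n%s", err, out)
        }
        log.Println("disk image ready")
    }

Since both steps print "Image resized." with empty STDERR in every run, the disk pipeline is healthy; the failure is isolated to the networking step.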

TestForceSystemdEnv (10.25s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-608000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-608000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.02968425s)

-- stdout --
	* [force-systemd-env-608000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-608000" primary control-plane node in "force-systemd-env-608000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-608000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
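The doubled "Creating qemu2 VM ... ERROR" in the stdout above is minikube's single retry: after the first StartHost failure it deletes the half-created profile, waits five seconds ("Will try again in 5 seconds" in the stderr below), and tries once more. Here is a hedged sketch of that control flow, using an illustrative createHost stand-in rather than minikube's real API.

    // retry.go - illustrative sketch of the delete/wait/retry flow visible in
    // these logs; createHost is a stand-in that fails the way these runs do.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func createHost() error {
        // Stand-in failure matching the captured STDERR.
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }

Because the root cause (no socket_vmnet daemon listening) is unchanged between attempts, the retry fails identically and the start exits with status 80.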
** stderr ** 
	I0415 05:06:10.519957   14555 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:06:10.520097   14555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:06:10.520103   14555 out.go:304] Setting ErrFile to fd 2...
	I0415 05:06:10.520105   14555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:06:10.520230   14555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:06:10.521339   14555 out.go:298] Setting JSON to false
	I0415 05:06:10.537832   14555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7541,"bootTime":1713175229,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:06:10.537899   14555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:06:10.548287   14555 out.go:177] * [force-systemd-env-608000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:06:10.555243   14555 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:06:10.551223   14555 notify.go:220] Checking for updates...
	I0415 05:06:10.569207   14555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:06:10.577226   14555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:06:10.580240   14555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:06:10.583252   14555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:06:10.586284   14555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0415 05:06:10.589607   14555 config.go:182] Loaded profile config "NoKubernetes-683000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0415 05:06:10.589679   14555 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:06:10.589729   14555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:06:10.593234   14555 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:06:10.599206   14555 start.go:297] selected driver: qemu2
	I0415 05:06:10.599211   14555 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:06:10.599216   14555 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:06:10.601419   14555 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:06:10.604207   14555 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:06:10.607328   14555 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 05:06:10.607356   14555 cni.go:84] Creating CNI manager for ""
	I0415 05:06:10.607362   14555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:06:10.607370   14555 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:06:10.607406   14555 start.go:340] cluster config:
	{Name:force-systemd-env-608000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:06:10.611255   14555 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:06:10.615214   14555 out.go:177] * Starting "force-systemd-env-608000" primary control-plane node in "force-systemd-env-608000" cluster
	I0415 05:06:10.622233   14555 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:06:10.622254   14555 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:06:10.622262   14555 cache.go:56] Caching tarball of preloaded images
	I0415 05:06:10.622340   14555 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:06:10.622345   14555 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:06:10.622393   14555 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/force-systemd-env-608000/config.json ...
	I0415 05:06:10.622403   14555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/force-systemd-env-608000/config.json: {Name:mk55e9ea14ba9dc231248595f2a6a34872732b9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:06:10.626937   14555 start.go:360] acquireMachinesLock for force-systemd-env-608000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:06:10.626975   14555 start.go:364] duration metric: took 31µs to acquireMachinesLock for "force-systemd-env-608000"
	I0415 05:06:10.626990   14555 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:06:10.627013   14555 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:06:10.634264   14555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0415 05:06:10.649482   14555 start.go:159] libmachine.API.Create for "force-systemd-env-608000" (driver="qemu2")
	I0415 05:06:10.649510   14555 client.go:168] LocalClient.Create starting
	I0415 05:06:10.649565   14555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:06:10.649603   14555 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:10.649626   14555 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:10.649662   14555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:06:10.649689   14555 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:10.649696   14555 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:10.652800   14555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:06:10.834126   14555 main.go:141] libmachine: Creating SSH key...
	I0415 05:06:10.964889   14555 main.go:141] libmachine: Creating Disk image...
	I0415 05:06:10.964895   14555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:06:10.965068   14555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2
	I0415 05:06:10.977681   14555 main.go:141] libmachine: STDOUT: 
	I0415 05:06:10.977700   14555 main.go:141] libmachine: STDERR: 
	I0415 05:06:10.977761   14555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2 +20000M
	I0415 05:06:10.988758   14555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:06:10.988773   14555 main.go:141] libmachine: STDERR: 
	I0415 05:06:10.988788   14555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2
	I0415 05:06:10.988792   14555 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:06:10.988821   14555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:6c:cc:9f:32:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2
	I0415 05:06:10.990509   14555 main.go:141] libmachine: STDOUT: 
	I0415 05:06:10.990523   14555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:06:10.990543   14555 client.go:171] duration metric: took 341.042416ms to LocalClient.Create
	I0415 05:06:12.992763   14555 start.go:128] duration metric: took 2.365806833s to createHost
	I0415 05:06:12.992863   14555 start.go:83] releasing machines lock for "force-systemd-env-608000", held for 2.365973667s
	W0415 05:06:12.992923   14555 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:13.016467   14555 out.go:177] * Deleting "force-systemd-env-608000" in qemu2 ...
	W0415 05:06:13.042452   14555 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:13.042487   14555 start.go:728] Will try again in 5 seconds ...
	I0415 05:06:18.043999   14555 start.go:360] acquireMachinesLock for force-systemd-env-608000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:06:18.057356   14555 start.go:364] duration metric: took 13.287791ms to acquireMachinesLock for "force-systemd-env-608000"
	I0415 05:06:18.057417   14555 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:06:18.057600   14555 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:06:18.070282   14555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0415 05:06:18.116612   14555 start.go:159] libmachine.API.Create for "force-systemd-env-608000" (driver="qemu2")
	I0415 05:06:18.116678   14555 client.go:168] LocalClient.Create starting
	I0415 05:06:18.116779   14555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:06:18.116839   14555 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:18.116862   14555 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:18.116930   14555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:06:18.116974   14555 main.go:141] libmachine: Decoding PEM data...
	I0415 05:06:18.116989   14555 main.go:141] libmachine: Parsing certificate...
	I0415 05:06:18.117501   14555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:06:18.285218   14555 main.go:141] libmachine: Creating SSH key...
	I0415 05:06:18.453653   14555 main.go:141] libmachine: Creating Disk image...
	I0415 05:06:18.453661   14555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:06:18.453818   14555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2
	I0415 05:06:18.466190   14555 main.go:141] libmachine: STDOUT: 
	I0415 05:06:18.466213   14555 main.go:141] libmachine: STDERR: 
	I0415 05:06:18.466263   14555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2 +20000M
	I0415 05:06:18.477221   14555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:06:18.477240   14555 main.go:141] libmachine: STDERR: 
	I0415 05:06:18.477253   14555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2
	I0415 05:06:18.477258   14555 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:06:18.477312   14555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:6a:56:e3:ac:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/force-systemd-env-608000/disk.qcow2
	I0415 05:06:18.479048   14555 main.go:141] libmachine: STDOUT: 
	I0415 05:06:18.479069   14555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:06:18.479079   14555 client.go:171] duration metric: took 362.41025ms to LocalClient.Create
	I0415 05:06:20.481175   14555 start.go:128] duration metric: took 2.423646333s to createHost
	I0415 05:06:20.481224   14555 start.go:83] releasing machines lock for "force-systemd-env-608000", held for 2.423942667s
	W0415 05:06:20.481586   14555 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-608000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-608000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:06:20.498267   14555 out.go:177] 
	W0415 05:06:20.502313   14555 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:06:20.502340   14555 out.go:239] * 
	* 
	W0415 05:06:20.503886   14555 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:06:20.511250   14555 out.go:177] 

** /stderr **
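
The stderr above shows the actual launch path: minikube does not start qemu-system-aarch64 directly, but through /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to the unix socket at /var/run/socket_vmnet and hands the connected descriptor to QEMU as fd 3 (the "-netdev socket,id=net0,fd=3" argument). When that connect is refused, QEMU never launches at all. A minimal Go sketch of the same reachability check (socket path taken from the log; the snippet is illustrative, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the control socket the way socket_vmnet_client does on startup.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // This is the condition seen throughout this report:
            // the socket_vmnet daemon is not listening.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On this host the probe would print the "connection refused" error, which explains why the initial create and the 5-second retry fail identically.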
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-608000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-608000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-608000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (70.652875ms)

-- stdout --
	* The control-plane node force-systemd-env-608000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-608000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-608000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-15 05:06:20.59363 -0700 PDT m=+783.571452792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-608000 -n force-systemd-env-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-608000 -n force-systemd-env-608000: exit status 7 (37.182125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-608000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-608000
--- FAIL: TestForceSystemdEnv (10.25s)
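
Had the VM come up, docker_test.go:110 would have asserted the point of this test: with MINIKUBE_FORCE_SYSTEMD set, "docker info --format {{.CgroupDriver}}" inside the guest should report systemd. A sketch of that assertion, assuming a running profile (which this run never reached, hence the exit-83 "host is not running" result):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same invocation as the test, minus the test harness.
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-env-608000",
            "ssh", "docker info --format {{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("ssh failed, host probably not running:", err)
            return
        }
        if driver := strings.TrimSpace(string(out)); driver != "systemd" {
            fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
        }
    }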

TestErrorSpam/setup (9.77s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-668000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-668000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 --driver=qemu2 : exit status 80 (9.765845125s)

-- stdout --
	* [nospam-668000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-668000" primary control-plane node in "nospam-668000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-668000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-668000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-668000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-668000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-668000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18644
- KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-668000" primary control-plane node in "nospam-668000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-668000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-668000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.77s)
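
TestErrorSpam/setup fails for two reasons visible above: stderr is not empty (each offending line is reported via error_spam_test.go:96), and the expected kubeadm init sub-steps never appear in stdout because provisioning aborts before kubeadm ever runs. A simplified sketch of the first check (not the real error_spam_test.go logic): on a clean start nothing should land on stderr, so every non-empty line is flagged.

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Illustrative stderr captured from a failed start.
        stderr := "! StartHost failed, but will try again: ...\n* Failed to start qemu2 VM. ..."
        for _, line := range strings.Split(stderr, "\n") {
            if strings.TrimSpace(line) == "" {
                continue
            }
            fmt.Printf("unexpected stderr: %q\n", line)
        }
    }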

TestFunctional/serial/StartWithProxy (9.9s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-424000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-424000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.825624083s)

-- stdout --
	* [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-424000" primary control-plane node in "functional-424000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-424000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52033 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52033 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52033 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-424000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18644
- KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-424000" primary control-plane node in "functional-424000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-424000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52033 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52033 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52033 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (69.193334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.90s)
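
The "Local proxy ignored" warnings are expected behavior rather than part of the failure: the harness exports HTTP_PROXY=localhost:52033, and a proxy bound to the host's loopback would be unreachable from inside the VM, so minikube declines to pass it into the docker env. An illustrative approximation of that rule (the function name is hypothetical):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func isLoopbackProxy(hostport string) bool {
        host := hostport
        if h, _, err := net.SplitHostPort(hostport); err == nil {
            host = h
        }
        host = strings.ToLower(host)
        return host == "localhost" || host == "127.0.0.1" || host == "::1"
    }

    func main() {
        fmt.Println(isLoopbackProxy("localhost:52033")) // true, matching the warnings above
    }

The test still fails because it expects the proxy-related messages of a successful start ("Found network options", "You appear to be using a proxy"), which are never reached.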

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-424000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-424000 --alsologtostderr -v=8: exit status 80 (5.188128s)

-- stdout --
	* [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-424000" primary control-plane node in "functional-424000" cluster
	* Restarting existing qemu2 VM for "functional-424000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-424000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 04:54:47.521547   13018 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:54:47.521677   13018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:54:47.521687   13018 out.go:304] Setting ErrFile to fd 2...
	I0415 04:54:47.521690   13018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:54:47.521826   13018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:54:47.522782   13018 out.go:298] Setting JSON to false
	I0415 04:54:47.538786   13018 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6858,"bootTime":1713175229,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:54:47.538851   13018 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:54:47.544316   13018 out.go:177] * [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:54:47.551109   13018 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 04:54:47.555216   13018 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:54:47.551151   13018 notify.go:220] Checking for updates...
	I0415 04:54:47.559523   13018 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:54:47.562237   13018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:54:47.565277   13018 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 04:54:47.568234   13018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 04:54:47.571504   13018 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:54:47.571559   13018 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:54:47.576236   13018 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 04:54:47.583149   13018 start.go:297] selected driver: qemu2
	I0415 04:54:47.583157   13018 start.go:901] validating driver "qemu2" against &{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:54:47.583208   13018 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 04:54:47.585512   13018 cni.go:84] Creating CNI manager for ""
	I0415 04:54:47.585526   13018 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 04:54:47.585573   13018 start.go:340] cluster config:
	{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:54:47.589788   13018 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 04:54:47.597189   13018 out.go:177] * Starting "functional-424000" primary control-plane node in "functional-424000" cluster
	I0415 04:54:47.601214   13018 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:54:47.601231   13018 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 04:54:47.601239   13018 cache.go:56] Caching tarball of preloaded images
	I0415 04:54:47.601303   13018 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 04:54:47.601309   13018 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 04:54:47.601384   13018 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/functional-424000/config.json ...
	I0415 04:54:47.601861   13018 start.go:360] acquireMachinesLock for functional-424000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 04:54:47.601889   13018 start.go:364] duration metric: took 21.208µs to acquireMachinesLock for "functional-424000"
	I0415 04:54:47.601897   13018 start.go:96] Skipping create...Using existing machine configuration
	I0415 04:54:47.601903   13018 fix.go:54] fixHost starting: 
	I0415 04:54:47.602020   13018 fix.go:112] recreateIfNeeded on functional-424000: state=Stopped err=<nil>
	W0415 04:54:47.602028   13018 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 04:54:47.610190   13018 out.go:177] * Restarting existing qemu2 VM for "functional-424000" ...
	I0415 04:54:47.614239   13018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7c:1b:66:75:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/disk.qcow2
	I0415 04:54:47.616218   13018 main.go:141] libmachine: STDOUT: 
	I0415 04:54:47.616239   13018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 04:54:47.616265   13018 fix.go:56] duration metric: took 14.36325ms for fixHost
	I0415 04:54:47.616268   13018 start.go:83] releasing machines lock for "functional-424000", held for 14.376167ms
	W0415 04:54:47.616277   13018 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 04:54:47.616310   13018 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:54:47.616315   13018 start.go:728] Will try again in 5 seconds ...
	I0415 04:54:52.618355   13018 start.go:360] acquireMachinesLock for functional-424000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 04:54:52.618771   13018 start.go:364] duration metric: took 335µs to acquireMachinesLock for "functional-424000"
	I0415 04:54:52.618906   13018 start.go:96] Skipping create...Using existing machine configuration
	I0415 04:54:52.618927   13018 fix.go:54] fixHost starting: 
	I0415 04:54:52.619701   13018 fix.go:112] recreateIfNeeded on functional-424000: state=Stopped err=<nil>
	W0415 04:54:52.619726   13018 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 04:54:52.624166   13018 out.go:177] * Restarting existing qemu2 VM for "functional-424000" ...
	I0415 04:54:52.634302   13018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7c:1b:66:75:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/disk.qcow2
	I0415 04:54:52.644468   13018 main.go:141] libmachine: STDOUT: 
	I0415 04:54:52.644542   13018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 04:54:52.644615   13018 fix.go:56] duration metric: took 25.692875ms for fixHost
	I0415 04:54:52.644633   13018 start.go:83] releasing machines lock for "functional-424000", held for 25.839291ms
	W0415 04:54:52.644853   13018 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:54:52.652089   13018 out.go:177] 
	W0415 04:54:52.656151   13018 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 04:54:52.656175   13018 out.go:239] * 
	* 
	W0415 04:54:52.658964   13018 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 04:54:52.665172   13018 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-424000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.189988834s for "functional-424000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (72.736916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
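
SoftStart takes the fixHost path rather than create: the profile already exists, so minikube restarts the stopped VM, hits the same refused connect, waits 5 seconds, retries once, and exits with GUEST_PROVISION. A minimal sketch of that two-attempt pattern (startHost is a stand-in for the real driver call):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHost() error {
        // In this report the driver always fails here, before QEMU launches.
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err == nil {
            return
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        time.Sleep(5 * time.Second)
        if err := startHost(); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }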

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.0005ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-424000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (32.538166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
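
The KubeContext failure is a downstream effect: because no cluster was ever provisioned, minikube never wrote a functional-424000 context into the kubeconfig, so "kubectl config current-context" exits 1. A sketch of the check the test performs:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "current-context").Output()
        if err != nil {
            // Matches this run: "error: current-context is not set".
            fmt.Println("no current context:", err)
            return
        }
        if got := strings.TrimSpace(string(out)); got != "functional-424000" {
            fmt.Printf("unexpected context %q\n", got)
        }
    }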

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-424000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-424000 get po -A: exit status 1 (26.874625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-424000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-424000\n"*: args "kubectl --context functional-424000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-424000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (32.139416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh sudo crictl images: exit status 83 (45.006208ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-424000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.652208ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-424000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.865791ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.932875ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-424000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)
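
The cache_reload sequence exercised above is: delete the cached image inside the node, confirm crictl no longer finds it, run "minikube cache reload", then confirm crictl can inspect it again. Every step here returns exit 83 because the host never started. A sketch of the same flow via os/exec, assuming a running profile:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) error {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        fmt.Printf("$ %v\n%s", args, out)
        return err
    }

    func main() {
        prof := "functional-424000"
        run("out/minikube-darwin-arm64", "-p", prof, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
        run("out/minikube-darwin-arm64", "-p", prof, "cache", "reload")
        if err := run("out/minikube-darwin-arm64", "-p", prof, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
            fmt.Println("image still missing after cache reload:", err)
        }
    }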

TestFunctional/serial/MinikubeKubectlCmd (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 kubectl -- --context functional-424000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 kubectl -- --context functional-424000 get pods: exit status 1 (655.472833ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-424000
	* no server found for cluster "functional-424000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-424000 kubectl -- --context functional-424000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (34.555875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.69s)
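
The failure here is upstream of kubectl itself: because the earlier start never completed, the kubeconfig referenced by KUBECONFIG contains no functional-424000 entry, so any `--context functional-424000` invocation fails in configuration before reaching a server. A pre-check is possible with `kubectl config get-contexts -o name`, which prints one context name per line; a sketch in Go (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists lists kubeconfig context names and scans for one.
	func contextExists(kubectl, name string) (bool, error) {
		out, err := exec.Command(kubectl, "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("kubectl", "functional-424000")
		fmt.Println(ok, err) // false <nil> against this run's kubeconfig
	}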

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.94s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-424000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-424000 get pods: exit status 1 (906.55225ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-424000
	* no server found for cluster "functional-424000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-424000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (31.88075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.94s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-424000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-424000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.191975541s)

-- stdout --
	* [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-424000" primary control-plane node in "functional-424000" cluster
	* Restarting existing qemu2 VM for "functional-424000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-424000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-424000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.1925735s for "functional-424000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (70.23275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
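
The root cause of this start failure, and of the other qemu2 start failures in this report, is visible in the driver log: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so qemu never receives its networking file descriptor. The socket can be probed directly, independent of minikube; a sketch (socket path taken from the log above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" means the socket file exists but no
		// socket_vmnet daemon is listening, matching the driver error
		// reported for every restart attempt in this run.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet service on the host (typically via its launchd or Homebrew service) is the usual prerequisite before retrying `minikube start`.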

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-424000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-424000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (27.940459ms)

** stderr ** 
	error: context "functional-424000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-424000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (32.331792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
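
ComponentHealth asserts over the JSON pod list for the control-plane tier; with no usable context it never gets data. On a healthy cluster the check reduces to decoding `kubectl get po -l tier=control-plane -n kube-system -o=json` and inspecting each item's status.phase. A minimal sketch of that decode (the struct follows the standard Kubernetes pod-list shape; this is an illustration, not the test's own code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase string `json:"phase"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-424000",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err) // the case in this run
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Metadata.Name, p.Status.Phase) // expect "Running"
		}
	}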

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 logs: exit status 83 (79.733708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
	|         | -p download-only-387000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| start   | -o=json --download-only                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
	|         | -p download-only-219000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| delete  | -p download-only-219000                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| start   | -o=json --download-only                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
	|         | -p download-only-370000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                                        |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| delete  | -p download-only-219000                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| start   | --download-only -p                                                       | binary-mirror-655000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | binary-mirror-655000                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
	|         | --binary-mirror                                                          |                      |         |                |                     |                     |
	|         | http://127.0.0.1:52001                                                   |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-655000                                                  | binary-mirror-655000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| addons  | enable dashboard -p                                                      | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | addons-891000                                                            |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | addons-891000                                                            |                      |         |                |                     |                     |
	| start   | -p addons-891000 --wait=true                                             | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
	|         | --addons=registry                                                        |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
	| delete  | -p addons-891000                                                         | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| start   | -p nospam-668000 -n=1 --memory=2250 --wait=false                         | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| delete  | -p nospam-668000                                                         | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | --memory=4000                                                            |                      |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
	| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
	| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | minikube-local-cache-test:functional-424000                              |                      |         |                |                     |                     |
	| cache   | functional-424000 cache delete                                           | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | minikube-local-cache-test:functional-424000                              |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| ssh     | functional-424000 ssh sudo                                               | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | crictl images                                                            |                      |         |                |                     |                     |
	| ssh     | functional-424000                                                        | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| ssh     | functional-424000 ssh                                                    | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-424000 cache reload                                           | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	| ssh     | functional-424000 ssh                                                    | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| kubectl | functional-424000 kubectl --                                             | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | --context functional-424000                                              |                      |         |                |                     |                     |
	|         | get pods                                                                 |                      |         |                |                     |                     |
	| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
	|         | --wait=all                                                               |                      |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 04:54:57
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 04:54:57.815469   13097 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:54:57.815608   13097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:54:57.815610   13097 out.go:304] Setting ErrFile to fd 2...
	I0415 04:54:57.815612   13097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:54:57.815730   13097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:54:57.816746   13097 out.go:298] Setting JSON to false
	I0415 04:54:57.832939   13097 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6868,"bootTime":1713175229,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:54:57.832999   13097 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:54:57.838155   13097 out.go:177] * [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:54:57.846153   13097 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 04:54:57.846208   13097 notify.go:220] Checking for updates...
	I0415 04:54:57.850046   13097 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:54:57.854132   13097 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:54:57.857080   13097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:54:57.859988   13097 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 04:54:57.862991   13097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 04:54:57.866373   13097 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:54:57.866419   13097 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:54:57.871022   13097 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 04:54:57.878076   13097 start.go:297] selected driver: qemu2
	I0415 04:54:57.878080   13097 start.go:901] validating driver "qemu2" against &{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:54:57.878122   13097 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 04:54:57.880324   13097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 04:54:57.880366   13097 cni.go:84] Creating CNI manager for ""
	I0415 04:54:57.880372   13097 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 04:54:57.880414   13097 start.go:340] cluster config:
	{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:54:57.884517   13097 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 04:54:57.893094   13097 out.go:177] * Starting "functional-424000" primary control-plane node in "functional-424000" cluster
	I0415 04:54:57.897944   13097 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:54:57.897958   13097 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 04:54:57.897965   13097 cache.go:56] Caching tarball of preloaded images
	I0415 04:54:57.898037   13097 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 04:54:57.898041   13097 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 04:54:57.898099   13097 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/functional-424000/config.json ...
	I0415 04:54:57.898703   13097 start.go:360] acquireMachinesLock for functional-424000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 04:54:57.898734   13097 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "functional-424000"
	I0415 04:54:57.898741   13097 start.go:96] Skipping create...Using existing machine configuration
	I0415 04:54:57.898746   13097 fix.go:54] fixHost starting: 
	I0415 04:54:57.898853   13097 fix.go:112] recreateIfNeeded on functional-424000: state=Stopped err=<nil>
	W0415 04:54:57.898862   13097 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 04:54:57.909113   13097 out.go:177] * Restarting existing qemu2 VM for "functional-424000" ...
	I0415 04:54:57.914148   13097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7c:1b:66:75:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/disk.qcow2
	I0415 04:54:57.916217   13097 main.go:141] libmachine: STDOUT: 
	I0415 04:54:57.916240   13097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 04:54:57.916261   13097 fix.go:56] duration metric: took 17.516333ms for fixHost
	I0415 04:54:57.916264   13097 start.go:83] releasing machines lock for "functional-424000", held for 17.527083ms
	W0415 04:54:57.916270   13097 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 04:54:57.916295   13097 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:54:57.916299   13097 start.go:728] Will try again in 5 seconds ...
	I0415 04:55:02.918374   13097 start.go:360] acquireMachinesLock for functional-424000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 04:55:02.918870   13097 start.go:364] duration metric: took 370.5µs to acquireMachinesLock for "functional-424000"
	I0415 04:55:02.919031   13097 start.go:96] Skipping create...Using existing machine configuration
	I0415 04:55:02.919047   13097 fix.go:54] fixHost starting: 
	I0415 04:55:02.919791   13097 fix.go:112] recreateIfNeeded on functional-424000: state=Stopped err=<nil>
	W0415 04:55:02.919810   13097 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 04:55:02.924270   13097 out.go:177] * Restarting existing qemu2 VM for "functional-424000" ...
	I0415 04:55:02.932381   13097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7c:1b:66:75:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/disk.qcow2
	I0415 04:55:02.942142   13097 main.go:141] libmachine: STDOUT: 
	I0415 04:55:02.942195   13097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 04:55:02.942270   13097 fix.go:56] duration metric: took 23.228541ms for fixHost
	I0415 04:55:02.942281   13097 start.go:83] releasing machines lock for "functional-424000", held for 23.393333ms
	W0415 04:55:02.942424   13097 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:55:02.949175   13097 out.go:177] 
	W0415 04:55:02.953287   13097 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 04:55:02.953312   13097 out.go:239] * 
	W0415 04:55:02.956956   13097 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 04:55:02.963252   13097 out.go:177] 
	
	
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-424000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
|         | -p download-only-387000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
| start   | -o=json --download-only                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
|         | -p download-only-219000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
| delete  | -p download-only-219000                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
| start   | -o=json --download-only                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
|         | -p download-only-370000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-rc.2                                        |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| delete  | -p download-only-219000                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| start   | --download-only -p                                                       | binary-mirror-655000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | binary-mirror-655000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:52001                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-655000                                                  | binary-mirror-655000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| addons  | enable dashboard -p                                                      | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | addons-891000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | addons-891000                                                            |                      |         |                |                     |                     |
| start   | -p addons-891000 --wait=true                                             | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-891000                                                         | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| start   | -p nospam-668000 -n=1 --memory=2250 --wait=false                         | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-668000                                                         | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | minikube-local-cache-test:functional-424000                              |                      |         |                |                     |                     |
| cache   | functional-424000 cache delete                                           | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | minikube-local-cache-test:functional-424000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| ssh     | functional-424000 ssh sudo                                               | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-424000                                                        | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-424000 ssh                                                    | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-424000 cache reload                                           | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| ssh     | functional-424000 ssh                                                    | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-424000 kubectl --                                             | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --context functional-424000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/15 04:54:57
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0415 04:54:57.815469   13097 out.go:291] Setting OutFile to fd 1 ...
I0415 04:54:57.815608   13097 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:54:57.815610   13097 out.go:304] Setting ErrFile to fd 2...
I0415 04:54:57.815612   13097 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:54:57.815730   13097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:54:57.816746   13097 out.go:298] Setting JSON to false
I0415 04:54:57.832939   13097 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6868,"bootTime":1713175229,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0415 04:54:57.832999   13097 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0415 04:54:57.838155   13097 out.go:177] * [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
I0415 04:54:57.846153   13097 out.go:177]   - MINIKUBE_LOCATION=18644
I0415 04:54:57.846208   13097 notify.go:220] Checking for updates...
I0415 04:54:57.850046   13097 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
I0415 04:54:57.854132   13097 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0415 04:54:57.857080   13097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0415 04:54:57.859988   13097 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
I0415 04:54:57.862991   13097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0415 04:54:57.866373   13097 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:54:57.866419   13097 driver.go:392] Setting default libvirt URI to qemu:///system
I0415 04:54:57.871022   13097 out.go:177] * Using the qemu2 driver based on existing profile
I0415 04:54:57.878076   13097 start.go:297] selected driver: qemu2
I0415 04:54:57.878080   13097 start.go:901] validating driver "qemu2" against &{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0415 04:54:57.878122   13097 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0415 04:54:57.880324   13097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0415 04:54:57.880366   13097 cni.go:84] Creating CNI manager for ""
I0415 04:54:57.880372   13097 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0415 04:54:57.880414   13097 start.go:340] cluster config:
{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0415 04:54:57.884517   13097 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0415 04:54:57.893094   13097 out.go:177] * Starting "functional-424000" primary control-plane node in "functional-424000" cluster
I0415 04:54:57.897944   13097 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0415 04:54:57.897958   13097 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0415 04:54:57.897965   13097 cache.go:56] Caching tarball of preloaded images
I0415 04:54:57.898037   13097 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0415 04:54:57.898041   13097 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0415 04:54:57.898099   13097 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/functional-424000/config.json ...
I0415 04:54:57.898703   13097 start.go:360] acquireMachinesLock for functional-424000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0415 04:54:57.898734   13097 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "functional-424000"
I0415 04:54:57.898741   13097 start.go:96] Skipping create...Using existing machine configuration
I0415 04:54:57.898746   13097 fix.go:54] fixHost starting: 
I0415 04:54:57.898853   13097 fix.go:112] recreateIfNeeded on functional-424000: state=Stopped err=<nil>
W0415 04:54:57.898862   13097 fix.go:138] unexpected machine state, will restart: <nil>
I0415 04:54:57.909113   13097 out.go:177] * Restarting existing qemu2 VM for "functional-424000" ...
I0415 04:54:57.914148   13097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7c:1b:66:75:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/disk.qcow2
I0415 04:54:57.916217   13097 main.go:141] libmachine: STDOUT: 
I0415 04:54:57.916240   13097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0415 04:54:57.916261   13097 fix.go:56] duration metric: took 17.516333ms for fixHost
I0415 04:54:57.916264   13097 start.go:83] releasing machines lock for "functional-424000", held for 17.527083ms
W0415 04:54:57.916270   13097 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0415 04:54:57.916295   13097 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0415 04:54:57.916299   13097 start.go:728] Will try again in 5 seconds ...
I0415 04:55:02.918374   13097 start.go:360] acquireMachinesLock for functional-424000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0415 04:55:02.918870   13097 start.go:364] duration metric: took 370.5µs to acquireMachinesLock for "functional-424000"
I0415 04:55:02.919031   13097 start.go:96] Skipping create...Using existing machine configuration
I0415 04:55:02.919047   13097 fix.go:54] fixHost starting: 
I0415 04:55:02.919791   13097 fix.go:112] recreateIfNeeded on functional-424000: state=Stopped err=<nil>
W0415 04:55:02.919810   13097 fix.go:138] unexpected machine state, will restart: <nil>
I0415 04:55:02.924270   13097 out.go:177] * Restarting existing qemu2 VM for "functional-424000" ...
I0415 04:55:02.932381   13097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7c:1b:66:75:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/disk.qcow2
I0415 04:55:02.942142   13097 main.go:141] libmachine: STDOUT: 
I0415 04:55:02.942195   13097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0415 04:55:02.942270   13097 fix.go:56] duration metric: took 23.228541ms for fixHost
I0415 04:55:02.942281   13097 start.go:83] releasing machines lock for "functional-424000", held for 23.393333ms
W0415 04:55:02.942424   13097 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0415 04:55:02.949175   13097 out.go:177] 
W0415 04:55:02.953287   13097 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0415 04:55:02.953312   13097 out.go:239] * 
W0415 04:55:02.956956   13097 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0415 04:55:02.963252   13097 out.go:177] 

* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
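Both restart attempts in the log above stop at the same precondition: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client gets "Connection refused" on /var/run/socket_vmnet, meaning nothing is listening on the daemon's socket. The standalone Go sketch below reproduces just that probe; it is illustrative, not part of the test suite, and the socket path is simply the one that appears in the qemu invocation in this log.

// probe_vmnet.go — hedged sketch: checks whether a socket_vmnet daemon is
// accepting connections on the path the failed starts above try to reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the qemu-system-aarch64 invocation in this log.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// ECONNREFUSED here corresponds to the driver error above:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		// (a missing socket file would surface as ENOENT instead).
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, restarting the socket_vmnet service on the agent (typically run as a root launchd service when installed via Homebrew) would be the first thing to try before re-running the suite.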
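The LogsCmd failure above and the LogsFileCmd failure below come from the same style of assertion: run `minikube logs`, then require certain words in the captured output (the message at functional_test.go:1224 names "Linux"). A minimal reconstruction of that check follows; only the word "Linux", the profile name, and the binary path are taken from this report, and the helper itself is an assumption, not the actual test code.

// logs_words_sketch.go — illustrative reconstruction of the failing check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Binary and profile name as used throughout this report.
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-424000", "logs").CombinedOutput()
	for _, word := range []string{"Linux"} {
		if !strings.Contains(string(out), word) {
			// With the VM stopped, `minikube logs` has nothing to collect,
			// so the expected word never appears and the test fails.
			fmt.Printf("expected minikube logs to include word: -%q-\n", word)
		}
	}
}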

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1378722611/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
|         | -p download-only-387000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
| start   | -o=json --download-only                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
|         | -p download-only-219000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
| delete  | -p download-only-219000                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
| start   | -o=json --download-only                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
|         | -p download-only-370000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-rc.2                                        |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| delete  | -p download-only-219000                                                  | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| start   | --download-only -p                                                       | binary-mirror-655000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | binary-mirror-655000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:52001                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-655000                                                  | binary-mirror-655000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| addons  | enable dashboard -p                                                      | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | addons-891000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | addons-891000                                                            |                      |         |                |                     |                     |
| start   | -p addons-891000 --wait=true                                             | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-891000                                                         | addons-891000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| start   | -p nospam-668000 -n=1 --memory=2250 --wait=false                         | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-668000 --log_dir                                                  | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-668000                                                         | nospam-668000        | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-424000 cache add                                              | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | minikube-local-cache-test:functional-424000                              |                      |         |                |                     |                     |
| cache   | functional-424000 cache delete                                           | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | minikube-local-cache-test:functional-424000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| ssh     | functional-424000 ssh sudo                                               | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-424000                                                        | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-424000 ssh                                                    | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-424000 cache reload                                           | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
| ssh     | functional-424000 ssh                                                    | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT | 15 Apr 24 04:54 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-424000 kubectl --                                             | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --context functional-424000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-424000                                                     | functional-424000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:54 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/15 04:54:57
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
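For reference, the `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` header above is the standard glog/klog layout: severity, date, time, thread id, source location, then the message. A minimal, hypothetical Go sketch (illustrative only, not code from minikube) that splits one of the lines below into those fields:

    package main

    import (
        "fmt"
        "regexp"
    )

    // glog/klog header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+):(\d+)\] (.*)$`)

    func main() {
        m := logLine.FindStringSubmatch("I0415 04:54:57.815469   13097 out.go:291] Setting OutFile to fd 1 ...")
        if m == nil {
            fmt.Println("no match")
            return
        }
        // m[1]=severity m[2]=mmdd m[3]=time m[4]=thread id m[5]=file m[6]=line m[7]=message
        fmt.Printf("severity=%s source=%s:%s msg=%q\n", m[1], m[5], m[6], m[7])
    }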
I0415 04:54:57.815469   13097 out.go:291] Setting OutFile to fd 1 ...
I0415 04:54:57.815608   13097 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:54:57.815610   13097 out.go:304] Setting ErrFile to fd 2...
I0415 04:54:57.815612   13097 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:54:57.815730   13097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:54:57.816746   13097 out.go:298] Setting JSON to false
I0415 04:54:57.832939   13097 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6868,"bootTime":1713175229,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0415 04:54:57.832999   13097 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0415 04:54:57.838155   13097 out.go:177] * [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
I0415 04:54:57.846153   13097 out.go:177]   - MINIKUBE_LOCATION=18644
I0415 04:54:57.846208   13097 notify.go:220] Checking for updates...
I0415 04:54:57.850046   13097 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
I0415 04:54:57.854132   13097 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0415 04:54:57.857080   13097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0415 04:54:57.859988   13097 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
I0415 04:54:57.862991   13097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0415 04:54:57.866373   13097 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:54:57.866419   13097 driver.go:392] Setting default libvirt URI to qemu:///system
I0415 04:54:57.871022   13097 out.go:177] * Using the qemu2 driver based on existing profile
I0415 04:54:57.878076   13097 start.go:297] selected driver: qemu2
I0415 04:54:57.878080   13097 start.go:901] validating driver "qemu2" against &{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0415 04:54:57.878122   13097 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0415 04:54:57.880324   13097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0415 04:54:57.880366   13097 cni.go:84] Creating CNI manager for ""
I0415 04:54:57.880372   13097 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0415 04:54:57.880414   13097 start.go:340] cluster config:
{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0415 04:54:57.884517   13097 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0415 04:54:57.893094   13097 out.go:177] * Starting "functional-424000" primary control-plane node in "functional-424000" cluster
I0415 04:54:57.897944   13097 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0415 04:54:57.897958   13097 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0415 04:54:57.897965   13097 cache.go:56] Caching tarball of preloaded images
I0415 04:54:57.898037   13097 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0415 04:54:57.898041   13097 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0415 04:54:57.898099   13097 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/functional-424000/config.json ...
I0415 04:54:57.898703   13097 start.go:360] acquireMachinesLock for functional-424000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0415 04:54:57.898734   13097 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "functional-424000"
I0415 04:54:57.898741   13097 start.go:96] Skipping create...Using existing machine configuration
I0415 04:54:57.898746   13097 fix.go:54] fixHost starting: 
I0415 04:54:57.898853   13097 fix.go:112] recreateIfNeeded on functional-424000: state=Stopped err=<nil>
W0415 04:54:57.898862   13097 fix.go:138] unexpected machine state, will restart: <nil>
I0415 04:54:57.909113   13097 out.go:177] * Restarting existing qemu2 VM for "functional-424000" ...
I0415 04:54:57.914148   13097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7c:1b:66:75:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/disk.qcow2
I0415 04:54:57.916217   13097 main.go:141] libmachine: STDOUT: 
I0415 04:54:57.916240   13097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0415 04:54:57.916261   13097 fix.go:56] duration metric: took 17.516333ms for fixHost
I0415 04:54:57.916264   13097 start.go:83] releasing machines lock for "functional-424000", held for 17.527083ms
W0415 04:54:57.916270   13097 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0415 04:54:57.916295   13097 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0415 04:54:57.916299   13097 start.go:728] Will try again in 5 seconds ...
I0415 04:55:02.918374   13097 start.go:360] acquireMachinesLock for functional-424000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0415 04:55:02.918870   13097 start.go:364] duration metric: took 370.5µs to acquireMachinesLock for "functional-424000"
I0415 04:55:02.919031   13097 start.go:96] Skipping create...Using existing machine configuration
I0415 04:55:02.919047   13097 fix.go:54] fixHost starting: 
I0415 04:55:02.919791   13097 fix.go:112] recreateIfNeeded on functional-424000: state=Stopped err=<nil>
W0415 04:55:02.919810   13097 fix.go:138] unexpected machine state, will restart: <nil>
I0415 04:55:02.924270   13097 out.go:177] * Restarting existing qemu2 VM for "functional-424000" ...
I0415 04:55:02.932381   13097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7c:1b:66:75:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/functional-424000/disk.qcow2
I0415 04:55:02.942142   13097 main.go:141] libmachine: STDOUT: 
I0415 04:55:02.942195   13097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0415 04:55:02.942270   13097 fix.go:56] duration metric: took 23.228541ms for fixHost
I0415 04:55:02.942281   13097 start.go:83] releasing machines lock for "functional-424000", held for 23.393333ms
W0415 04:55:02.942424   13097 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0415 04:55:02.949175   13097 out.go:177] 
W0415 04:55:02.953287   13097 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0415 04:55:02.953312   13097 out.go:239] * 
W0415 04:55:02.956956   13097 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0415 04:55:02.963252   13097 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
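Both restart attempts in the "Last Start" log above fail at the same step: socket_vmnet_client cannot reach the unix socket /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the guest is never provisioned. This points at the host-side socket_vmnet daemon rather than the VM. A minimal sketch, assuming the condition is reproducible outside the test run, that probes the same socket (illustrative code, not part of minikube):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the same unix socket that socket_vmnet_client dials above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // "connection refused" here matches the failure in the log:
            // the socket_vmnet daemon is not accepting connections.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails with "connection refused", the daemon is not listening (or the socket file is stale), which is consistent with the GUEST_PROVISION exit above.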

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-424000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-424000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.498792ms)

** stderr ** 
	error: context "functional-424000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-424000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
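The error here is purely client-side: because the earlier start never completed, no "functional-424000" context was ever written to the kubeconfig, so every kubectl --context invocation fails before reaching any cluster. A minimal sketch, assuming client-go is available, that lists the contexts actually present (illustrative, not part of the test suite):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig from the default locations ($KUBECONFIG or ~/.kube/config).
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        for name := range cfg.Contexts {
            // "functional-424000" would appear here after a successful start.
            fmt.Println("context:", name)
        }
    }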

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-424000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-424000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-424000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-424000 --alsologtostderr -v=1] stderr:
I0415 04:55:44.933233   13425 out.go:291] Setting OutFile to fd 1 ...
I0415 04:55:44.933657   13425 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:44.933661   13425 out.go:304] Setting ErrFile to fd 2...
I0415 04:55:44.933664   13425 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:44.933836   13425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:55:44.934049   13425 mustload.go:65] Loading cluster: functional-424000
I0415 04:55:44.934247   13425 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:55:44.938153   13425 out.go:177] * The control-plane node functional-424000 host is not running: state=Stopped
I0415 04:55:44.941956   13425 out.go:177]   To start a cluster, run: "minikube start -p functional-424000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (44.052333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 status: exit status 7 (31.71725ms)

-- stdout --
	functional-424000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-424000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (31.865625ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-424000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 status -o json: exit status 7 (32.131375ms)

-- stdout --
	{"Name":"functional-424000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-424000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (31.862167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
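The JSON form of the status output above is the easiest one to consume programmatically. A minimal sketch whose struct is inferred from that single output line (the field set is an assumption based on this report, not taken from minikube's source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status mirrors the JSON object printed by "minikube status -o json" above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := `{"Name":"functional-424000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
        var st Status
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            panic(err)
        }
        fmt.Println(st.Name, "host:", st.Host) // functional-424000 host: Stopped
    }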

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-424000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-424000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.278125ms)

** stderr ** 
	error: context "functional-424000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-424000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-424000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-424000 describe po hello-node-connect: exit status 1 (26.260042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:1600: "kubectl --context functional-424000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-424000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-424000 logs -l app=hello-node-connect: exit status 1 (26.499167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:1606: "kubectl --context functional-424000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-424000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-424000 describe svc hello-node-connect: exit status 1 (26.252541ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:1612: "kubectl --context functional-424000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (32.390459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-424000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (31.979542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "echo hello": exit status 83 (50.978416ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"\n"*. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "cat /etc/hostname": exit status 83 (50.920167ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-424000"- but got *"* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"\n"*. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (36.099875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (57.564167ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-424000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 "sudo cat /home/docker/cp-test.txt": exit status 83 (46.053834ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-424000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-424000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cp functional-424000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3034470053/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 cp functional-424000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3034470053/001/cp-test.txt: exit status 83 (43.579291ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-424000 cp functional-424000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3034470053/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 "sudo cat /home/docker/cp-test.txt": exit status 83 (56.694875ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3034470053/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.762459ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-424000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (43.038041ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-424000 ssh -n functional-424000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-424000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-424000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
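The "(-want +got)" blocks in this failure (and in the FileSync and CertSync failures that follow) are go-cmp diffs: the test compares the expected file content against whatever the command actually printed. A minimal sketch, assuming github.com/google/go-cmp is available, that reproduces the shape of these diffs:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-424000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-424000\"\n"
        // cmp.Diff returns "" when the values are equal; otherwise a diff
        // rendered with -want/+got markers like the blocks in this report.
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("content mismatch (-want +got):\n%s", diff)
        }
    }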

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12715/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/test/nested/copy/12715/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/test/nested/copy/12715/hosts": exit status 83 (42.539375ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/test/nested/copy/12715/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-424000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-424000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (32.177625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12715.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/12715.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/12715.pem": exit status 83 (42.190375ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/12715.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"sudo cat /etc/ssl/certs/12715.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/12715.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-424000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-424000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12715.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /usr/share/ca-certificates/12715.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /usr/share/ca-certificates/12715.pem": exit status 83 (42.6495ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/12715.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"sudo cat /usr/share/ca-certificates/12715.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/12715.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-424000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-424000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (46.716125ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-424000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-424000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/127152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/127152.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/127152.pem": exit status 83 (42.662583ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/127152.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"sudo cat /etc/ssl/certs/127152.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/127152.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-424000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-424000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/127152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /usr/share/ca-certificates/127152.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /usr/share/ca-certificates/127152.pem": exit status 83 (42.718833ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/127152.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"sudo cat /usr/share/ca-certificates/127152.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/127152.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-424000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-424000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (47.631125ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-424000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-424000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-424000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (34.026667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
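
Each CertSync probe above fails before any file comparison happens: ssh exits with status 83 because the guest is stopped, so the diff pits the expected PEM against minikube's "host is not running" hint. A minimal sketch of re-running one probe by hand, assuming the profile can actually boot on this host:

	out/minikube-darwin-arm64 start -p functional-424000
	out/minikube-darwin-arm64 -p functional-424000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"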

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-424000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-424000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.061292ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-424000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-424000 -n functional-424000: exit status 7 (32.91775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
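
The NodeLabels assertions never reach a node: the kubeconfig context is missing because the cluster never came up. A sketch of inspecting the labels the test expects (minikube.k8s.io/commit, version, updated_at, name, primary) on a healthy cluster, assuming a valid context:

	kubectl --context functional-424000 get nodes --show-labels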

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo systemctl is-active crio": exit status 83 (53.622417ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
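
On a running node whose active runtime is docker, the test expects "systemctl is-active crio" to report "inactive"; here the command never reaches systemd at all. A hand-run sketch, assuming the guest is booted:

	out/minikube-darwin-arm64 -p functional-424000 ssh "sudo systemctl is-active crio"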

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 version -o=json --components: exit status 83 (43.820375ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
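
With a running cluster, the same command emits a JSON document containing the component versions the test greps for (buildctl, containerd, crictl, docker, minikubeVersion, and so on). A hand-run sketch, assuming the node is up:

	out/minikube-darwin-arm64 -p functional-424000 version -o=json --components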

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-424000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-424000 image ls --format short --alsologtostderr:
I0415 04:55:45.349291   13440 out.go:291] Setting OutFile to fd 1 ...
I0415 04:55:45.349450   13440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.349453   13440 out.go:304] Setting ErrFile to fd 2...
I0415 04:55:45.349455   13440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.349602   13440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:55:45.350027   13440 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:55:45.350084   13440 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-424000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-424000 image ls --format table --alsologtostderr:
I0415 04:55:45.577535   13452 out.go:291] Setting OutFile to fd 1 ...
I0415 04:55:45.577689   13452 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.577692   13452 out.go:304] Setting ErrFile to fd 2...
I0415 04:55:45.577695   13452 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.577815   13452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:55:45.578231   13452 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:55:45.578288   13452 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-424000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-424000 image ls --format json --alsologtostderr:
I0415 04:55:45.539420   13450 out.go:291] Setting OutFile to fd 1 ...
I0415 04:55:45.539562   13450 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.539566   13450 out.go:304] Setting ErrFile to fd 2...
I0415 04:55:45.539568   13450 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.539695   13450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:55:45.540108   13450 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:55:45.540169   13450 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-424000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-424000 image ls --format yaml --alsologtostderr:
I0415 04:55:45.386722   13442 out.go:291] Setting OutFile to fd 1 ...
I0415 04:55:45.386867   13442 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.386871   13442 out.go:304] Setting ErrFile to fd 2...
I0415 04:55:45.386873   13442 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.387000   13442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:55:45.387422   13442 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:55:45.387483   13442 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
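
All four list formats (short, table, json, yaml) return an empty image set because no runtime ever pulled anything; each format is expected to list registry.k8s.io/pause on a healthy node. A quick sanity sketch against a booted profile:

	out/minikube-darwin-arm64 -p functional-424000 image ls --format table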

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh pgrep buildkitd: exit status 83 (42.822958ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image build -t localhost/my-image:functional-424000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-424000 image build -t localhost/my-image:functional-424000 testdata/build --alsologtostderr:
I0415 04:55:45.466745   13446 out.go:291] Setting OutFile to fd 1 ...
I0415 04:55:45.467127   13446 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.467131   13446 out.go:304] Setting ErrFile to fd 2...
I0415 04:55:45.467134   13446 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:45.467296   13446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:55:45.467774   13446 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:55:45.468210   13446 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:55:45.468449   13446 build_images.go:133] succeeded building to: 
I0415 04:55:45.468452   13446 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls
functional_test.go:442: expected "localhost/my-image:functional-424000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
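
ImageBuild depends on buildkitd inside the guest, so both the pgrep probe and the build degrade to the stopped-host hint and the built tag never appears. A sketch of the intended sequence against a booted node:

	out/minikube-darwin-arm64 -p functional-424000 ssh pgrep buildkitd
	out/minikube-darwin-arm64 -p functional-424000 image build -t localhost/my-image:functional-424000 testdata/build
	out/minikube-darwin-arm64 -p functional-424000 image ls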

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-424000 docker-env) && out/minikube-darwin-arm64 status -p functional-424000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-424000 docker-env) && out/minikube-darwin-arm64 status -p functional-424000": exit status 1 (47.89725ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2: exit status 83 (43.982833ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
** stderr ** 
	I0415 04:55:45.216679   13434 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:55:45.217351   13434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:45.217354   13434 out.go:304] Setting ErrFile to fd 2...
	I0415 04:55:45.217357   13434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:45.217497   13434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:55:45.217696   13434 mustload.go:65] Loading cluster: functional-424000
	I0415 04:55:45.217910   13434 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:55:45.221682   13434 out.go:177] * The control-plane node functional-424000 host is not running: state=Stopped
	I0415 04:55:45.225724   13434 out.go:177]   To start a cluster, run: "minikube start -p functional-424000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2: exit status 83 (43.671417ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
** stderr ** 
	I0415 04:55:45.305441   13438 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:55:45.305577   13438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:45.305581   13438 out.go:304] Setting ErrFile to fd 2...
	I0415 04:55:45.305583   13438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:45.305725   13438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:55:45.305956   13438 mustload.go:65] Loading cluster: functional-424000
	I0415 04:55:45.306162   13438 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:55:45.309720   13438 out.go:177] * The control-plane node functional-424000 host is not running: state=Stopped
	I0415 04:55:45.313712   13438 out.go:177]   To start a cluster, run: "minikube start -p functional-424000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2: exit status 83 (43.513792ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
** stderr ** 
	I0415 04:55:45.261332   13436 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:55:45.261517   13436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:45.261520   13436 out.go:304] Setting ErrFile to fd 2...
	I0415 04:55:45.261522   13436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:45.261632   13436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:55:45.261853   13436 mustload.go:65] Loading cluster: functional-424000
	I0415 04:55:45.262070   13436 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:55:45.265794   13436 out.go:177] * The control-plane node functional-424000 host is not running: state=Stopped
	I0415 04:55:45.269754   13436 out.go:177]   To start a cluster, run: "minikube start -p functional-424000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-424000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)
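
All three UpdateContextCmd variants fail identically: update-context requires a live host before it will touch the kubeconfig, so none of the expected "No changes" / "context has been updated" messages can appear. A sketch of verifying the result after a successful run, assuming the cluster is up:

	out/minikube-darwin-arm64 -p functional-424000 update-context
	kubectl config current-context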

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-424000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-424000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.460584ms)

** stderr ** 
	error: context "functional-424000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-424000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 service list: exit status 83 (46.875333ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-424000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 service list -o json: exit status 83 (42.711875ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-424000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 service --namespace=default --https --url hello-node: exit status 83 (44.857458ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-424000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 service hello-node --url --format={{.IP}}: exit status 83 (44.888083ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-424000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 service hello-node --url: exit status 83 (44.927084ms)

-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-424000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test.go:1565: failed to parse "* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"": parse "* The control-plane node functional-424000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-424000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
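
The ServiceCmd chain fails at its first link: the hello-node deployment is never created because the context does not exist, so every later "service" subcommand has nothing to resolve and returns the stopped-host hint, which then fails IP and URL parsing. A sketch of the intended flow on a working cluster:

	kubectl --context functional-424000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	out/minikube-darwin-arm64 -p functional-424000 service hello-node --url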

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0415 04:55:04.945362   13215 out.go:291] Setting OutFile to fd 1 ...
I0415 04:55:04.945539   13215 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:04.945543   13215 out.go:304] Setting ErrFile to fd 2...
I0415 04:55:04.945545   13215 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:55:04.945703   13215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 04:55:04.945945   13215 mustload.go:65] Loading cluster: functional-424000
I0415 04:55:04.946148   13215 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:55:04.950447   13215 out.go:177] * The control-plane node functional-424000 host is not running: state=Stopped
I0415 04:55:04.960537   13215 out.go:177]   To start a cluster, run: "minikube start -p functional-424000"

stdout: * The control-plane node functional-424000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-424000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 13216: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-424000": client config: context "functional-424000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (119.8s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-424000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-424000 get svc nginx-svc: exit status 1 (69.246208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-424000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-424000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (119.80s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image load --daemon gcr.io/google-containers/addon-resizer:functional-424000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-424000 image load --daemon gcr.io/google-containers/addon-resizer:functional-424000 --alsologtostderr: (1.258063542s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-424000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image load --daemon gcr.io/google-containers/addon-resizer:functional-424000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-424000 image load --daemon gcr.io/google-containers/addon-resizer:functional-424000 --alsologtostderr: (1.286650125s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-424000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.312605667s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-424000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image load --daemon gcr.io/google-containers/addon-resizer:functional-424000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-424000 image load --daemon gcr.io/google-containers/addon-resizer:functional-424000 --alsologtostderr: (1.159731333s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-424000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.55s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image save gcr.io/google-containers/addon-resizer:functional-424000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
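
"image save" produced no tarball because there is no runtime to export from, and the ImageLoadFromFile run just below then loads from that missing file. A sketch of the save/load round trip the pair intends, assuming the tagged image exists in a running node:

	out/minikube-darwin-arm64 -p functional-424000 image save gcr.io/google-containers/addon-resizer:functional-424000 /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-arm64 -p functional-424000 image load /Users/jenkins/workspace/addon-resizer-save.tar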

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-424000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.031813708s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 15 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
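
The dig timeout is expected in this state: 10.96.0.10 is the in-cluster DNS service IP, reachable from the Mac only while "minikube tunnel" is routing it, and no tunnel survived the earlier TunnelCmd failures. A sketch of the intended query path, assuming a running cluster and an active tunnel:

	out/minikube-darwin-arm64 -p functional-424000 tunnel &
	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A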

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (26.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (26.48s)
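AccessThroughDNS then drives a plain HTTP GET at the same name and looks for the nginx welcome page; with the resolver unreachable, the request can only time out, as it does above. A rough equivalent of the probe; the client timeout is an assumption, since the harness's exact settings are not shown in this log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second} // timeout value assumed
	resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
	if err != nil {
		// Matches the failure above: context deadline exceeded while
		// awaiting response headers.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if !strings.Contains(string(body), "Welcome to nginx!") {
		fmt.Println("unexpected response body")
	}
}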

TestMultiControlPlane/serial/StartCluster (9.85s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-771000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-771000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.771634375s)

-- stdout --
	* [ha-771000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-771000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 04:57:56.931137   13514 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:57:56.931281   13514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:57:56.931284   13514 out.go:304] Setting ErrFile to fd 2...
	I0415 04:57:56.931287   13514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:57:56.931425   13514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:57:56.932410   13514 out.go:298] Setting JSON to false
	I0415 04:57:56.948696   13514 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7047,"bootTime":1713175229,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:57:56.948761   13514 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:57:56.954615   13514 out.go:177] * [ha-771000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:57:56.961700   13514 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 04:57:56.966615   13514 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:57:56.961733   13514 notify.go:220] Checking for updates...
	I0415 04:57:56.972662   13514 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:57:56.975563   13514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:57:56.978637   13514 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 04:57:56.981639   13514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 04:57:56.984878   13514 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:57:56.989647   13514 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 04:57:56.996509   13514 start.go:297] selected driver: qemu2
	I0415 04:57:56.996517   13514 start.go:901] validating driver "qemu2" against <nil>
	I0415 04:57:56.996523   13514 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 04:57:56.998816   13514 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:57:57.001632   13514 out.go:177] * Automatically selected the socket_vmnet network
	I0415 04:57:57.004728   13514 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 04:57:57.004771   13514 cni.go:84] Creating CNI manager for ""
	I0415 04:57:57.004776   13514 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 04:57:57.004788   13514 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 04:57:57.004832   13514 start.go:340] cluster config:
	{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:57:57.009249   13514 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 04:57:57.016607   13514 out.go:177] * Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	I0415 04:57:57.020665   13514 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:57:57.020681   13514 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 04:57:57.020691   13514 cache.go:56] Caching tarball of preloaded images
	I0415 04:57:57.020747   13514 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 04:57:57.020753   13514 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 04:57:57.020970   13514 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/ha-771000/config.json ...
	I0415 04:57:57.020983   13514 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/ha-771000/config.json: {Name:mk5bb2cf43eb9adbbf99528eb9727c0c27c33f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:57:57.021396   13514 start.go:360] acquireMachinesLock for ha-771000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 04:57:57.021429   13514 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "ha-771000"
	I0415 04:57:57.021442   13514 start.go:93] Provisioning new machine with config: &{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 04:57:57.021471   13514 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 04:57:57.030615   13514 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 04:57:57.048273   13514 start.go:159] libmachine.API.Create for "ha-771000" (driver="qemu2")
	I0415 04:57:57.048302   13514 client.go:168] LocalClient.Create starting
	I0415 04:57:57.048364   13514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 04:57:57.048394   13514 main.go:141] libmachine: Decoding PEM data...
	I0415 04:57:57.048401   13514 main.go:141] libmachine: Parsing certificate...
	I0415 04:57:57.048439   13514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 04:57:57.048461   13514 main.go:141] libmachine: Decoding PEM data...
	I0415 04:57:57.048470   13514 main.go:141] libmachine: Parsing certificate...
	I0415 04:57:57.048876   13514 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 04:57:57.174268   13514 main.go:141] libmachine: Creating SSH key...
	I0415 04:57:57.237995   13514 main.go:141] libmachine: Creating Disk image...
	I0415 04:57:57.238001   13514 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 04:57:57.238186   13514 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 04:57:57.250696   13514 main.go:141] libmachine: STDOUT: 
	I0415 04:57:57.250718   13514 main.go:141] libmachine: STDERR: 
	I0415 04:57:57.250766   13514 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2 +20000M
	I0415 04:57:57.261628   13514 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 04:57:57.261645   13514 main.go:141] libmachine: STDERR: 
	I0415 04:57:57.261666   13514 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 04:57:57.261671   13514 main.go:141] libmachine: Starting QEMU VM...
	I0415 04:57:57.261693   13514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:a4:ca:ba:9c:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 04:57:57.263436   13514 main.go:141] libmachine: STDOUT: 
	I0415 04:57:57.263454   13514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 04:57:57.263473   13514 client.go:171] duration metric: took 215.169875ms to LocalClient.Create
	I0415 04:57:59.265592   13514 start.go:128] duration metric: took 2.2441565s to createHost
	I0415 04:57:59.265720   13514 start.go:83] releasing machines lock for "ha-771000", held for 2.244281s
	W0415 04:57:59.265787   13514 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:57:59.277924   13514 out.go:177] * Deleting "ha-771000" in qemu2 ...
	W0415 04:57:59.302896   13514 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:57:59.302934   13514 start.go:728] Will try again in 5 seconds ...
	I0415 04:58:04.305105   13514 start.go:360] acquireMachinesLock for ha-771000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 04:58:04.305579   13514 start.go:364] duration metric: took 364.042µs to acquireMachinesLock for "ha-771000"
	I0415 04:58:04.305702   13514 start.go:93] Provisioning new machine with config: &{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 04:58:04.305967   13514 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 04:58:04.315495   13514 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 04:58:04.366009   13514 start.go:159] libmachine.API.Create for "ha-771000" (driver="qemu2")
	I0415 04:58:04.366064   13514 client.go:168] LocalClient.Create starting
	I0415 04:58:04.366176   13514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 04:58:04.366232   13514 main.go:141] libmachine: Decoding PEM data...
	I0415 04:58:04.366247   13514 main.go:141] libmachine: Parsing certificate...
	I0415 04:58:04.366311   13514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 04:58:04.366355   13514 main.go:141] libmachine: Decoding PEM data...
	I0415 04:58:04.366369   13514 main.go:141] libmachine: Parsing certificate...
	I0415 04:58:04.366885   13514 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 04:58:04.501376   13514 main.go:141] libmachine: Creating SSH key...
	I0415 04:58:04.602887   13514 main.go:141] libmachine: Creating Disk image...
	I0415 04:58:04.602893   13514 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 04:58:04.603088   13514 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 04:58:04.615832   13514 main.go:141] libmachine: STDOUT: 
	I0415 04:58:04.615852   13514 main.go:141] libmachine: STDERR: 
	I0415 04:58:04.615906   13514 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2 +20000M
	I0415 04:58:04.626973   13514 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 04:58:04.626991   13514 main.go:141] libmachine: STDERR: 
	I0415 04:58:04.627001   13514 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 04:58:04.627004   13514 main.go:141] libmachine: Starting QEMU VM...
	I0415 04:58:04.627046   13514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:2d:88:bb:a9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 04:58:04.628638   13514 main.go:141] libmachine: STDOUT: 
	I0415 04:58:04.628655   13514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 04:58:04.628667   13514 client.go:171] duration metric: took 262.603875ms to LocalClient.Create
	I0415 04:58:06.630790   13514 start.go:128] duration metric: took 2.324855167s to createHost
	I0415 04:58:06.630847   13514 start.go:83] releasing machines lock for "ha-771000", held for 2.325300417s
	W0415 04:58:06.631198   13514 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 04:58:06.638873   13514 out.go:177] 
	W0415 04:58:06.644930   13514 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 04:58:06.644957   13514 out.go:239] * 
	* 
	W0415 04:58:06.647282   13514 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 04:58:06.657841   13514 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-771000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (73.2955ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.85s)
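Note where the start actually dies: both VM creation attempts fail at the same step, when socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so this failure and the TestMultiControlPlane failures that follow share one root cause. The daemon's liveness can be probed directly; a small sketch, with only the socket path taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" here reproduces the libmachine error above,
	// i.e. nothing is listening on the socket_vmnet Unix socket.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}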

TestMultiControlPlane/serial/DeployApp (113.98s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (62.309292ms)

** stderr ** 
	error: cluster "ha-771000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- rollout status deployment/busybox: exit status 1 (59.106417ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.339584ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.250708ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.255875ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.767083ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.793542ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.669625ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.367875ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.165667ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.649333ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.429834ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.718583ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.49975ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.226625ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.193709ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.646791ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.212125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (113.98s)
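Everything in DeployApp is downstream of the StartCluster failure: with no server behind the ha-771000 context, the pod-IP poll at ha_test.go:140 reran the same kubectl query eleven times before giving up. The shape of that poll, as far as this log shows it; the command and attempt count are taken verbatim from the log, while the sleep interval and helper name are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs runs the exact query logged by ha_test.go:140.
func podIPs() (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "ha-771000", "--",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for attempt := 1; attempt <= 11; attempt++ { // eleven attempts appear in the log
		if ips, err := podIPs(); err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		time.Sleep(10 * time.Second) // interval assumed
	}
	fmt.Println("failed to resolve pod IPs")
}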

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.607833ms)

** stderr ** 
	error: no server found for cluster "ha-771000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (32.101667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-771000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-771000 -v=7 --alsologtostderr: exit status 83 (43.318208ms)

-- stdout --
	* The control-plane node ha-771000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-771000"

-- /stdout --
** stderr ** 
	I0415 05:00:00.849690   13613 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:00.850122   13613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:00.850125   13613 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:00.850127   13613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:00.850248   13613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:00.850455   13613 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:00.850633   13613 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:00.854590   13613 out.go:177] * The control-plane node ha-771000 host is not running: state=Stopped
	I0415 05:00:00.858579   13613 out.go:177]   To start a cluster, run: "minikube start -p ha-771000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-771000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.056958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-771000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-771000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.589ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-771000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-771000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-771000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.245375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-771000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-771000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (32.237125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)
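HAppyAfterClusterStart fails on bookkeeping rather than on a new error: the profile JSON dumped above contains one node with Status "Stopped", where the test expects four nodes and a "HAppy" status. Pulling out just those two fields makes the mismatch obvious; a pared-down decoder over the JSON shape shown above, with the struct limited to the fields this check needs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the parts of the `profile list --output json`
// payload that appear in the failure message above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}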

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status --output json -v=7 --alsologtostderr: exit status 7 (31.921375ms)

-- stdout --
	{"Name":"ha-771000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0415 05:00:01.088425   13636 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:01.088568   13636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.088572   13636 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:01.088574   13636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.088707   13636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:01.088849   13636 out.go:298] Setting JSON to true
	I0415 05:00:01.088859   13636 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:01.088929   13636 notify.go:220] Checking for updates...
	I0415 05:00:01.089092   13636 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:01.089098   13636 status.go:255] checking status of ha-771000 ...
	I0415 05:00:01.089310   13636 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:01.089314   13636 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:01.089316   13636 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-771000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.8705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
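CopyFile never reaches a copy: the harness decodes `status --output json` into a []cmd.Status slice, but with a single node minikube printed one bare JSON object, hence "cannot unmarshal object into Go value of type []cmd.Status". One tolerant way to accept either shape; a sketch, with the Status fields copied from the object in the log (TimeToStop, DockerEnv and PodManEnv omitted):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts both a bare object (as printed above for a
// single-node cluster) and an array of objects.
func decodeStatuses(raw []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"ha-771000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%+v\n", sts)
}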

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 node stop m02 -v=7 --alsologtostderr: exit status 85 (54.458417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0415 05:00:01.153376   13640 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:01.153786   13640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.153790   13640 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:01.153792   13640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.153972   13640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:01.154217   13640 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:01.154408   13640 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:01.158581   13640 out.go:177] 
	W0415 05:00:01.161647   13640 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0415 05:00:01.161652   13640 out.go:239] * 
	* 
	W0415 05:00:01.163670   13640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:00:01.168555   13640 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-771000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (32.466541ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:01.208096   13642 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:01.208229   13642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.208232   13642 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:01.208235   13642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.208364   13642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:01.208496   13642 out.go:298] Setting JSON to false
	I0415 05:00:01.208506   13642 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:01.208561   13642 notify.go:220] Checking for updates...
	I0415 05:00:01.208730   13642 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:01.208736   13642 status.go:255] checking status of ha-771000 ...
	I0415 05:00:01.208943   13642 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:01.208947   13642 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:01.208950   13642 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.814458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
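
Exit status 85 here is minikube's GUEST_NODE_RETRIEVE error: because StartCluster failed earlier in this run, the ha-771000 profile only ever recorded a single, unnamed control-plane node, so `node stop m02` has nothing to look up. A sketch of that lookup under this assumption; the Node struct is pared down from the profile config quoted later in this report, and findNode is a hypothetical helper, not minikube's actual code:

package main

import "fmt"

// Node keeps only the fields visible in the Nodes array of the
// `profile list --output json` blob quoted below.
type Node struct {
	Name         string
	ControlPlane bool
	Worker       bool
}

// findNode is a hypothetical lookup by node name.
func findNode(nodes []Node, name string) (*Node, bool) {
	for i := range nodes {
		if nodes[i].Name == name {
			return &nodes[i], true
		}
	}
	return nil, false
}

func main() {
	// The only node the stopped profile carries: Name is empty.
	nodes := []Node{{Name: "", ControlPlane: true, Worker: true}}
	if _, ok := findNode(nodes, "m02"); !ok {
		// The same condition minikube surfaces above as GUEST_NODE_RETRIEVE.
		fmt.Println("Could not find node m02")
	}
}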

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-771000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.608666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
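
The assertion at ha_test.go:413 shells out to `profile list --output json` and checks the profile's Status field, expecting "Degraded" once a secondary control plane is down; with no node ever started it reads "Stopped" instead. A minimal decode of that check; the struct keeps only the two fields the assertion needs, and the input is trimmed from the JSON blob quoted above:

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models just enough of the `profile list --output json`
// payload for the status check; the full Config object is omitted.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Trimmed from the blob quoted in the failure above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-771000","Status":"Stopped"}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-771000" && p.Status != "Degraded" {
			// Mirrors the failure above: expected "Degraded", have "Stopped".
			fmt.Printf("profile %q: want Degraded, got %s\n", p.Name, p.Status)
		}
	}
}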

TestMultiControlPlane/serial/RestartSecondaryNode (54.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 node start m02 -v=7 --alsologtostderr: exit status 85 (49.844959ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0415 05:00:01.376350   13652 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:01.376770   13652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.376774   13652 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:01.376776   13652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.376924   13652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:01.377129   13652 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:01.377348   13652 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:01.381653   13652 out.go:177] 
	W0415 05:00:01.384634   13652 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0415 05:00:01.384639   13652 out.go:239] * 
	* 
	W0415 05:00:01.386957   13652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:00:01.391595   13652 out.go:177] 

** /stderr **
ha_test.go:422: I0415 05:00:01.376350   13652 out.go:291] Setting OutFile to fd 1 ...
I0415 05:00:01.376770   13652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 05:00:01.376774   13652 out.go:304] Setting ErrFile to fd 2...
I0415 05:00:01.376776   13652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 05:00:01.376924   13652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 05:00:01.377129   13652 mustload.go:65] Loading cluster: ha-771000
I0415 05:00:01.377348   13652 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 05:00:01.381653   13652 out.go:177] 
W0415 05:00:01.384634   13652 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0415 05:00:01.384639   13652 out.go:239] * 
* 
W0415 05:00:01.386957   13652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0415 05:00:01.391595   13652 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-771000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (32.327833ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:01.426232   13654 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:01.426401   13654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.426404   13654 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:01.426406   13654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:01.426546   13654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:01.426680   13654 out.go:298] Setting JSON to false
	I0415 05:00:01.426690   13654 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:01.426760   13654 notify.go:220] Checking for updates...
	I0415 05:00:01.426877   13654 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:01.426882   13654 status.go:255] checking status of ha-771000 ...
	I0415 05:00:01.427092   13654 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:01.427096   13654 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:01.427098   13654 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (76.320125ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:02.449551   13659 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:02.449747   13659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:02.449751   13659 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:02.449754   13659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:02.449912   13659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:02.450065   13659 out.go:298] Setting JSON to false
	I0415 05:00:02.450078   13659 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:02.450122   13659 notify.go:220] Checking for updates...
	I0415 05:00:02.450328   13659 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:02.450335   13659 status.go:255] checking status of ha-771000 ...
	I0415 05:00:02.450605   13659 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:02.450610   13659 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:02.450613   13659 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (77.904375ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:04.080475   13661 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:04.080660   13661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:04.080664   13661 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:04.080667   13661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:04.080835   13661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:04.081007   13661 out.go:298] Setting JSON to false
	I0415 05:00:04.081022   13661 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:04.081046   13661 notify.go:220] Checking for updates...
	I0415 05:00:04.081276   13661 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:04.081283   13661 status.go:255] checking status of ha-771000 ...
	I0415 05:00:04.081596   13661 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:04.081601   13661 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:04.081604   13661 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (76.72825ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:05.913395   13663 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:05.913586   13663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:05.913590   13663 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:05.913593   13663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:05.913767   13663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:05.913921   13663 out.go:298] Setting JSON to false
	I0415 05:00:05.913934   13663 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:05.913984   13663 notify.go:220] Checking for updates...
	I0415 05:00:05.914195   13663 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:05.914208   13663 status.go:255] checking status of ha-771000 ...
	I0415 05:00:05.914503   13663 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:05.914508   13663 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:05.914511   13663 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (76.138375ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:10.297855   13667 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:10.298059   13667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:10.298063   13667 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:10.298067   13667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:10.298231   13667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:10.298394   13667 out.go:298] Setting JSON to false
	I0415 05:00:10.298407   13667 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:10.298443   13667 notify.go:220] Checking for updates...
	I0415 05:00:10.298686   13667 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:10.298696   13667 status.go:255] checking status of ha-771000 ...
	I0415 05:00:10.298956   13667 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:10.298961   13667 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:10.298964   13667 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (74.586125ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:15.194139   13669 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:15.194350   13669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:15.194354   13669 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:15.194357   13669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:15.194532   13669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:15.194716   13669 out.go:298] Setting JSON to false
	I0415 05:00:15.194729   13669 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:15.194762   13669 notify.go:220] Checking for updates...
	I0415 05:00:15.194982   13669 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:15.194989   13669 status.go:255] checking status of ha-771000 ...
	I0415 05:00:15.195246   13669 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:15.195251   13669 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:15.195253   13669 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (75.707125ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:20.145135   13671 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:20.145342   13671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:20.145347   13671 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:20.145350   13671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:20.145525   13671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:20.145690   13671 out.go:298] Setting JSON to false
	I0415 05:00:20.145703   13671 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:20.145749   13671 notify.go:220] Checking for updates...
	I0415 05:00:20.145965   13671 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:20.145974   13671 status.go:255] checking status of ha-771000 ...
	I0415 05:00:20.146244   13671 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:20.146249   13671 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:20.146252   13671 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (77.174041ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:30.853853   13673 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:30.854060   13673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:30.854064   13673 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:30.854067   13673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:30.854227   13673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:30.854392   13673 out.go:298] Setting JSON to false
	I0415 05:00:30.854406   13673 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:30.854447   13673 notify.go:220] Checking for updates...
	I0415 05:00:30.854655   13673 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:30.854662   13673 status.go:255] checking status of ha-771000 ...
	I0415 05:00:30.854923   13673 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:30.854928   13673 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:30.854931   13673 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (76.3235ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:00:55.537299   13681 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:55.537524   13681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:55.537528   13681 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:55.537532   13681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:55.537671   13681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:55.537830   13681 out.go:298] Setting JSON to false
	I0415 05:00:55.537842   13681 mustload.go:65] Loading cluster: ha-771000
	I0415 05:00:55.537882   13681 notify.go:220] Checking for updates...
	I0415 05:00:55.538111   13681 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:55.538119   13681 status.go:255] checking status of ha-771000 ...
	I0415 05:00:55.538386   13681 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:00:55.538390   13681 status.go:343] host is not running, skipping remaining checks
	I0415 05:00:55.538393   13681 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (34.608625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.28s)
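
Worth noting about the 54-second duration: the gaps between the status retries above (05:00:01, :02, :04, :05, :10, :15, :20, :30, :55) grow roughly geometrically, so the test appears to poll with increasing backoff until its window closes. An illustrative loop of that pattern, with an assumed 54s deadline and doubling wait; this is a sketch, not the retry helper ha_test.go actually uses:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(54 * time.Second) // assumed window
	wait := 1 * time.Second
	for time.Now().Before(deadline) {
		// The same invocation the test repeats above.
		err := exec.Command("out/minikube-darwin-arm64", "-p", "ha-771000",
			"status", "-v=7", "--alsologtostderr").Run()
		if err == nil {
			fmt.Println("status returned 0; cluster is up")
			return
		}
		time.Sleep(wait)
		wait *= 2 // approximates the widening gaps seen in the timestamps
	}
	fmt.Println("deadline passed; status never succeeded (exit status 7 throughout)")
}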

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-771000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-771000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.559833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
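
ha_test.go:304 and :307 apply two checks to the same `profile list --output json` payload: the node count under Config.Nodes (expected 4 after the restart) and the profile status (expected "HAppy"). Extending the decode sketch from the DegradedAfterControlPlaneNodeStop failure above with the Nodes array, again trimmed to the fields visible in the quoted JSON:

package main

import (
	"encoding/json"
	"fmt"
)

// profileList adds the Config.Nodes array to the earlier sketch; all other
// Config fields from the log above are omitted.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Trimmed from the blob quoted above: one profile, one unnamed node.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-771000","Status":"Stopped","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	// Wants 4 nodes and status "HAppy"; the stopped single-node profile
	// satisfies neither, matching both failures above.
	fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
}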

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.5s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-771000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-771000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-771000 -v=7 --alsologtostderr: (2.140422583s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.222830708s)

-- stdout --
	* [ha-771000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	* Restarting existing qemu2 VM for "ha-771000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-771000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:00:57.915952   13705 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:00:57.916118   13705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:57.916124   13705 out.go:304] Setting ErrFile to fd 2...
	I0415 05:00:57.916127   13705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:00:57.916289   13705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:00:57.917475   13705 out.go:298] Setting JSON to false
	I0415 05:00:57.936172   13705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7228,"bootTime":1713175229,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:00:57.936235   13705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:00:57.941578   13705 out.go:177] * [ha-771000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:00:57.948444   13705 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:00:57.952282   13705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:00:57.948521   13705 notify.go:220] Checking for updates...
	I0415 05:00:57.955412   13705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:00:57.958464   13705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:00:57.961459   13705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:00:57.964487   13705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:00:57.967774   13705 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:00:57.967833   13705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:00:57.972436   13705 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:00:57.979440   13705 start.go:297] selected driver: qemu2
	I0415 05:00:57.979449   13705 start.go:901] validating driver "qemu2" against &{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.29.3 ClusterName:ha-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:00:57.979504   13705 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:00:57.981779   13705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:00:57.981830   13705 cni.go:84] Creating CNI manager for ""
	I0415 05:00:57.981838   13705 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 05:00:57.981888   13705 start.go:340] cluster config:
	{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-771000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:00:57.986422   13705 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:00:57.993382   13705 out.go:177] * Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	I0415 05:00:57.997417   13705 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:00:57.997431   13705 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:00:57.997445   13705 cache.go:56] Caching tarball of preloaded images
	I0415 05:00:57.997497   13705 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:00:57.997502   13705 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:00:57.997552   13705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/ha-771000/config.json ...
	I0415 05:00:57.998011   13705 start.go:360] acquireMachinesLock for ha-771000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:00:57.998043   13705 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "ha-771000"
	I0415 05:00:57.998051   13705 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:00:57.998058   13705 fix.go:54] fixHost starting: 
	I0415 05:00:57.998170   13705 fix.go:112] recreateIfNeeded on ha-771000: state=Stopped err=<nil>
	W0415 05:00:57.998178   13705 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:00:58.005376   13705 out.go:177] * Restarting existing qemu2 VM for "ha-771000" ...
	I0415 05:00:58.009309   13705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:2d:88:bb:a9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 05:00:58.011359   13705 main.go:141] libmachine: STDOUT: 
	I0415 05:00:58.011378   13705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:00:58.011402   13705 fix.go:56] duration metric: took 13.345042ms for fixHost
	I0415 05:00:58.011408   13705 start.go:83] releasing machines lock for "ha-771000", held for 13.360834ms
	W0415 05:00:58.011414   13705 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:00:58.011445   13705 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:00:58.011450   13705 start.go:728] Will try again in 5 seconds ...
	I0415 05:01:03.013404   13705 start.go:360] acquireMachinesLock for ha-771000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:01:03.013739   13705 start.go:364] duration metric: took 245.75µs to acquireMachinesLock for "ha-771000"
	I0415 05:01:03.013839   13705 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:01:03.013854   13705 fix.go:54] fixHost starting: 
	I0415 05:01:03.014414   13705 fix.go:112] recreateIfNeeded on ha-771000: state=Stopped err=<nil>
	W0415 05:01:03.014436   13705 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:01:03.018783   13705 out.go:177] * Restarting existing qemu2 VM for "ha-771000" ...
	I0415 05:01:03.028092   13705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:2d:88:bb:a9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 05:01:03.037714   13705 main.go:141] libmachine: STDOUT: 
	I0415 05:01:03.037778   13705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:01:03.037840   13705 fix.go:56] duration metric: took 23.984041ms for fixHost
	I0415 05:01:03.037859   13705 start.go:83] releasing machines lock for "ha-771000", held for 24.098417ms
	W0415 05:01:03.038027   13705 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:01:03.045904   13705 out.go:177] 
	W0415 05:01:03.049947   13705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:01:03.049972   13705 out.go:239] * 
	* 
	W0415 05:01:03.052598   13705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:01:03.059815   13705 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-771000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-771000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (33.815083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.50s)
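Every failure in this block reduces to the same root cause: the qemu2 driver's socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM restart is retried once and then aborts with GUEST_PROVISION. A minimal triage sketch for the affected builder, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (the service name and socket path may differ for other install methods):

	# Does the socket exist, and is a daemon actually listening on it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, restart the daemon; it needs root to use
	# the macOS vmnet framework.
	sudo brew services restart socket_vmnet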

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 node delete m03 -v=7 --alsologtostderr: exit status 83 (43.54875ms)

-- stdout --
	* The control-plane node ha-771000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-771000"

-- /stdout --
** stderr ** 
	I0415 05:01:03.210788   13717 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:01:03.211185   13717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:03.211197   13717 out.go:304] Setting ErrFile to fd 2...
	I0415 05:01:03.211199   13717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:03.211363   13717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:01:03.211585   13717 mustload.go:65] Loading cluster: ha-771000
	I0415 05:01:03.211777   13717 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:01:03.215627   13717 out.go:177] * The control-plane node ha-771000 host is not running: state=Stopped
	I0415 05:01:03.219288   13717 out.go:177]   To start a cluster, run: "minikube start -p ha-771000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-771000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (32.148541ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:01:03.254796   13719 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:01:03.254937   13719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:03.254940   13719 out.go:304] Setting ErrFile to fd 2...
	I0415 05:01:03.254942   13719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:03.255068   13719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:01:03.255177   13719 out.go:298] Setting JSON to false
	I0415 05:01:03.255186   13719 mustload.go:65] Loading cluster: ha-771000
	I0415 05:01:03.255252   13719 notify.go:220] Checking for updates...
	I0415 05:01:03.255386   13719 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:01:03.255391   13719 status.go:255] checking status of ha-771000 ...
	I0415 05:01:03.255590   13719 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:01:03.255594   13719 status.go:343] host is not running, skipping remaining checks
	I0415 05:01:03.255596   13719 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (32.144333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-771000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.631458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
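The assertion above compares only the Status field of the `profile list --output json` output against "Degraded". When triaging by hand, that field can be pulled out of the JSON directly; a sketch assuming jq is available on the host (it is not part of the minikube tooling):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-771000") | .Status'
	# prints "Stopped" here, where the test expected "Degraded"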

TestMultiControlPlane/serial/StopCluster (3.85s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-771000 stop -v=7 --alsologtostderr: (3.744214209s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (69.505875ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:01:07.202625   13749 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:01:07.202831   13749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:07.202834   13749 out.go:304] Setting ErrFile to fd 2...
	I0415 05:01:07.202837   13749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:07.202990   13749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:01:07.203132   13749 out.go:298] Setting JSON to false
	I0415 05:01:07.203148   13749 mustload.go:65] Loading cluster: ha-771000
	I0415 05:01:07.203187   13749 notify.go:220] Checking for updates...
	I0415 05:01:07.203371   13749 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:01:07.203378   13749 status.go:255] checking status of ha-771000 ...
	I0415 05:01:07.203619   13749 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0415 05:01:07.203623   13749 status.go:343] host is not running, skipping remaining checks
	I0415 05:01:07.203626   13749 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (33.223875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.85s)

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183785833s)

-- stdout --
	* [ha-771000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	* Restarting existing qemu2 VM for "ha-771000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-771000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:01:07.267875   13753 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:01:07.268005   13753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:07.268008   13753 out.go:304] Setting ErrFile to fd 2...
	I0415 05:01:07.268010   13753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:07.268141   13753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:01:07.269095   13753 out.go:298] Setting JSON to false
	I0415 05:01:07.285015   13753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7238,"bootTime":1713175229,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:01:07.285085   13753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:01:07.289675   13753 out.go:177] * [ha-771000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:01:07.296621   13753 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:01:07.300598   13753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:01:07.296663   13753 notify.go:220] Checking for updates...
	I0415 05:01:07.304584   13753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:01:07.307610   13753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:01:07.310599   13753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:01:07.313593   13753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:01:07.316898   13753 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:01:07.317164   13753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:01:07.321602   13753 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:01:07.328595   13753 start.go:297] selected driver: qemu2
	I0415 05:01:07.328605   13753 start.go:901] validating driver "qemu2" against &{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:01:07.328675   13753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:01:07.330809   13753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:01:07.330860   13753 cni.go:84] Creating CNI manager for ""
	I0415 05:01:07.330864   13753 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 05:01:07.330911   13753 start.go:340] cluster config:
	{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:01:07.335037   13753 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:01:07.342619   13753 out.go:177] * Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	I0415 05:01:07.346578   13753 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:01:07.346598   13753 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:01:07.346607   13753 cache.go:56] Caching tarball of preloaded images
	I0415 05:01:07.346671   13753 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:01:07.346677   13753 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:01:07.346730   13753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/ha-771000/config.json ...
	I0415 05:01:07.347223   13753 start.go:360] acquireMachinesLock for ha-771000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:01:07.347255   13753 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "ha-771000"
	I0415 05:01:07.347268   13753 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:01:07.347274   13753 fix.go:54] fixHost starting: 
	I0415 05:01:07.347398   13753 fix.go:112] recreateIfNeeded on ha-771000: state=Stopped err=<nil>
	W0415 05:01:07.347406   13753 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:01:07.351595   13753 out.go:177] * Restarting existing qemu2 VM for "ha-771000" ...
	I0415 05:01:07.358532   13753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:2d:88:bb:a9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 05:01:07.360629   13753 main.go:141] libmachine: STDOUT: 
	I0415 05:01:07.360649   13753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:01:07.360678   13753 fix.go:56] duration metric: took 13.405083ms for fixHost
	I0415 05:01:07.360682   13753 start.go:83] releasing machines lock for "ha-771000", held for 13.42375ms
	W0415 05:01:07.360689   13753 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:01:07.360723   13753 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:01:07.360728   13753 start.go:728] Will try again in 5 seconds ...
	I0415 05:01:12.362714   13753 start.go:360] acquireMachinesLock for ha-771000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:01:12.363192   13753 start.go:364] duration metric: took 338.208µs to acquireMachinesLock for "ha-771000"
	I0415 05:01:12.363364   13753 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:01:12.363388   13753 fix.go:54] fixHost starting: 
	I0415 05:01:12.364099   13753 fix.go:112] recreateIfNeeded on ha-771000: state=Stopped err=<nil>
	W0415 05:01:12.364124   13753 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:01:12.372327   13753 out.go:177] * Restarting existing qemu2 VM for "ha-771000" ...
	I0415 05:01:12.376663   13753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:2d:88:bb:a9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/ha-771000/disk.qcow2
	I0415 05:01:12.385605   13753 main.go:141] libmachine: STDOUT: 
	I0415 05:01:12.385675   13753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:01:12.385785   13753 fix.go:56] duration metric: took 22.399709ms for fixHost
	I0415 05:01:12.385805   13753 start.go:83] releasing machines lock for "ha-771000", held for 22.560667ms
	W0415 05:01:12.386109   13753 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:01:12.393401   13753 out.go:177] 
	W0415 05:01:12.397701   13753 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:01:12.397734   13753 out.go:239] * 
	* 
	W0415 05:01:12.400482   13753 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:01:12.407496   13753 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (70.00475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-771000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.767666ms)
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-771000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-771000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.737ms)

-- stdout --
	* The control-plane node ha-771000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-771000"

-- /stdout --
** stderr ** 
	I0415 05:01:12.629817   13769 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:01:12.629972   13769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:12.629975   13769 out.go:304] Setting ErrFile to fd 2...
	I0415 05:01:12.629978   13769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:12.630127   13769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:01:12.630380   13769 mustload.go:65] Loading cluster: ha-771000
	I0415 05:01:12.630576   13769 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:01:12.633569   13769 out.go:177] * The control-plane node ha-771000 host is not running: state=Stopped
	I0415 05:01:12.637504   13769 out.go:177]   To start a cluster, run: "minikube start -p ha-771000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-771000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.938917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-771000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-771000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (31.504042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

TestImageBuild/serial/Setup (10s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-170000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-170000 --driver=qemu2 : exit status 80 (9.928624458s)

-- stdout --
	* [image-170000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-170000" primary control-plane node in "image-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-170000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-170000 -n image-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-170000 -n image-170000: exit status 7 (69.811459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.00s)
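
Note: every provisioning failure in this report shares the same root cause: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client is refused before qemu can attach its network device. A minimal standalone probe in Go that reproduces the check (a hypothetical helper, not part of the test suite; running unprivileged may yield "permission denied" rather than "connection refused"):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet_client dials this unix socket; "connection refused"
		// means the socket_vmnet daemon is not running on the host.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}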

TestJSONOutput/start/Command (9.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-002000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-002000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.910238834s)

-- stdout --
	{"specversion":"1.0","id":"9b07fe50-8c0d-4cdc-928e-7d7f34a9a77a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-002000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"855205f5-8540-4688-9bf2-2dd0f4b6e45e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18644"}}
	{"specversion":"1.0","id":"0ffbb296-46dc-4109-9fe0-6e3382a34f06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig"}}
	{"specversion":"1.0","id":"faa2c2e8-4bb6-476a-a5c3-a16b8a4e566b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"062cbc52-ac58-48c7-9f59-2c6f412a379f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d13821ff-a062-4280-a2b2-d2d547de61fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube"}}
	{"specversion":"1.0","id":"a522dc9b-c0c9-4f3e-a37c-768fecf1618f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"66c1647b-78ed-4bac-886d-8dd8b00f90a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5060843b-db5b-4784-be89-4e34d398c622","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b5a61238-86fe-454b-94b2-41ec4dd143b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-002000\" primary control-plane node in \"json-output-002000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"61ac65ae-4842-4b9a-b682-5c20bcd3d007","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"cb32dd73-8ac8-44be-8303-8cfb96cbcfd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-002000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed015518-6509-413d-9eac-04cb52783258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"927a4789-bf4b-470a-badb-00dc2120fc10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c4478f1d-1a06-438b-96f4-8ec881501fe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-002000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"dc646983-4c7a-47c1-8703-91a521d10258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"3a53deb9-bf2c-4f53-bf63-8ae0a8d807ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-002000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.91s)
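
Note: the "invalid character 'O'" failure above (and the "invalid character '*'" failure in TestJSONOutput/unpause below) follows directly from the mixed output: the test decodes stdout line by line as CloudEvents, but the raw "OUTPUT:" / "ERROR:" lines leaked by socket_vmnet_client, and the "*"-prefixed human-readable advice, are not JSON. A hypothetical snippet (assuming Go's encoding/json, whose error wording matches the log) reproduces the exact message:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var event map[string]interface{}
		// The first non-JSON byte aborts decoding immediately.
		fmt.Println(json.Unmarshal([]byte("OUTPUT: "), &event))
		// Output: invalid character 'O' looking for beginning of value
	}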

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-002000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-002000 --output=json --user=testUser: exit status 83 (79.27475ms)

-- stdout --
	{"specversion":"1.0","id":"86d901fd-f5b1-4648-beed-0031971cd710","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-002000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"07c5b09c-d8c1-4b8e-bea9-49107de6c424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-002000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-002000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-002000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-002000 --output=json --user=testUser: exit status 83 (48.147875ms)

-- stdout --
	* The control-plane node json-output-002000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-002000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-002000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-002000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.29s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-927000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-927000 --driver=qemu2 : exit status 80 (9.852050375s)

-- stdout --
	* [first-927000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-927000" primary control-plane node in "first-927000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-927000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-927000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-927000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-15 05:01:46.787377 -0700 PDT m=+509.753803126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-935000 -n second-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-935000 -n second-935000: exit status 85 (80.95525ms)

-- stdout --
	* Profile "second-935000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-935000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-935000" host is not running, skipping log retrieval (state="* Profile \"second-935000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-935000\"")
helpers_test.go:175: Cleaning up "second-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-935000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-15 05:01:47.099855 -0700 PDT m=+510.066294042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-927000 -n first-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-927000 -n first-927000: exit status 7 (31.824166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-927000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-927000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-927000
--- FAIL: TestMinikubeProfile (10.29s)
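
Note: four distinct exit codes have appeared so far, and telling them apart makes the post-mortems easier to read. A small summary-as-code (meanings inferred only from the adjacent log lines in this report; not an authoritative list of minikube exit codes):

	package main

	import "fmt"

	// Codes observed in this report, paired with the behavior logged
	// alongside them. Inferred from context; not exhaustive.
	var observed = []struct {
		code    int
		meaning string
	}{
		{7, "minikube status: host Stopped"},
		{80, "GUEST_PROVISION: could not create the qemu2 VM"},
		{83, "control-plane host not running (pause/unpause declined)"},
		{85, "profile not found"},
	}

	func main() {
		for _, e := range observed {
			fmt.Printf("exit %d: %s\n", e.code, e.meaning)
		}
	}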

TestMountStart/serial/StartWithMountFirst (10.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-005000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-005000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.943512167s)

-- stdout --
	* [mount-start-1-005000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-005000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-005000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-005000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-005000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-005000 -n mount-start-1-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-005000 -n mount-start-1-005000: exit status 7 (70.074292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)

TestMultiNode/serial/FreshStart2Nodes (9.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-671000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-671000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.810687917s)

-- stdout --
	* [multinode-671000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:01:57.600437   13938 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:01:57.600579   13938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:57.600583   13938 out.go:304] Setting ErrFile to fd 2...
	I0415 05:01:57.600593   13938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:01:57.600740   13938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:01:57.601770   13938 out.go:298] Setting JSON to false
	I0415 05:01:57.617728   13938 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7288,"bootTime":1713175229,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:01:57.617789   13938 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:01:57.623846   13938 out.go:177] * [multinode-671000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:01:57.631714   13938 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:01:57.631764   13938 notify.go:220] Checking for updates...
	I0415 05:01:57.638716   13938 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:01:57.641816   13938 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:01:57.644736   13938 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:01:57.650155   13938 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:01:57.652714   13938 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:01:57.655883   13938 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:01:57.659581   13938 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:01:57.666750   13938 start.go:297] selected driver: qemu2
	I0415 05:01:57.666759   13938 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:01:57.666767   13938 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:01:57.669143   13938 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:01:57.670690   13938 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:01:57.673763   13938 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:01:57.673806   13938 cni.go:84] Creating CNI manager for ""
	I0415 05:01:57.673811   13938 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 05:01:57.673817   13938 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 05:01:57.673863   13938 start.go:340] cluster config:
	{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:01:57.678243   13938 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:01:57.685631   13938 out.go:177] * Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	I0415 05:01:57.689685   13938 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:01:57.689716   13938 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:01:57.689723   13938 cache.go:56] Caching tarball of preloaded images
	I0415 05:01:57.689784   13938 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:01:57.689789   13938 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:01:57.689994   13938 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/multinode-671000/config.json ...
	I0415 05:01:57.690006   13938 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/multinode-671000/config.json: {Name:mke27b32a6d265b950cb260b83edb06ee826b81f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:01:57.690416   13938 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:01:57.690449   13938 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "multinode-671000"
	I0415 05:01:57.690460   13938 start.go:93] Provisioning new machine with config: &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:01:57.690484   13938 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:01:57.697691   13938 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:01:57.715519   13938 start.go:159] libmachine.API.Create for "multinode-671000" (driver="qemu2")
	I0415 05:01:57.715570   13938 client.go:168] LocalClient.Create starting
	I0415 05:01:57.715634   13938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:01:57.715663   13938 main.go:141] libmachine: Decoding PEM data...
	I0415 05:01:57.715682   13938 main.go:141] libmachine: Parsing certificate...
	I0415 05:01:57.715722   13938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:01:57.715747   13938 main.go:141] libmachine: Decoding PEM data...
	I0415 05:01:57.715759   13938 main.go:141] libmachine: Parsing certificate...
	I0415 05:01:57.716271   13938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:01:57.837284   13938 main.go:141] libmachine: Creating SSH key...
	I0415 05:01:57.985105   13938 main.go:141] libmachine: Creating Disk image...
	I0415 05:01:57.985112   13938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:01:57.985323   13938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:01:57.998063   13938 main.go:141] libmachine: STDOUT: 
	I0415 05:01:57.998084   13938 main.go:141] libmachine: STDERR: 
	I0415 05:01:57.998137   13938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2 +20000M
	I0415 05:01:58.009035   13938 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:01:58.009051   13938 main.go:141] libmachine: STDERR: 
	I0415 05:01:58.009066   13938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:01:58.009079   13938 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:01:58.009112   13938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:5f:a2:17:c7:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:01:58.010786   13938 main.go:141] libmachine: STDOUT: 
	I0415 05:01:58.010802   13938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:01:58.010821   13938 client.go:171] duration metric: took 295.258ms to LocalClient.Create
	I0415 05:02:00.012940   13938 start.go:128] duration metric: took 2.322526208s to createHost
	I0415 05:02:00.013026   13938 start.go:83] releasing machines lock for "multinode-671000", held for 2.322664292s
	W0415 05:02:00.013097   13938 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:02:00.020672   13938 out.go:177] * Deleting "multinode-671000" in qemu2 ...
	W0415 05:02:00.039378   13938 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:02:00.039458   13938 start.go:728] Will try again in 5 seconds ...
	I0415 05:02:05.041439   13938 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:02:05.041935   13938 start.go:364] duration metric: took 397.875µs to acquireMachinesLock for "multinode-671000"
	I0415 05:02:05.042065   13938 start.go:93] Provisioning new machine with config: &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:02:05.042440   13938 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:02:05.051021   13938 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:02:05.102339   13938 start.go:159] libmachine.API.Create for "multinode-671000" (driver="qemu2")
	I0415 05:02:05.102390   13938 client.go:168] LocalClient.Create starting
	I0415 05:02:05.102510   13938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:02:05.102584   13938 main.go:141] libmachine: Decoding PEM data...
	I0415 05:02:05.102599   13938 main.go:141] libmachine: Parsing certificate...
	I0415 05:02:05.102656   13938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:02:05.102701   13938 main.go:141] libmachine: Decoding PEM data...
	I0415 05:02:05.102715   13938 main.go:141] libmachine: Parsing certificate...
	I0415 05:02:05.103285   13938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:02:05.240886   13938 main.go:141] libmachine: Creating SSH key...
	I0415 05:02:05.308983   13938 main.go:141] libmachine: Creating Disk image...
	I0415 05:02:05.308988   13938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:02:05.309177   13938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:02:05.321438   13938 main.go:141] libmachine: STDOUT: 
	I0415 05:02:05.321459   13938 main.go:141] libmachine: STDERR: 
	I0415 05:02:05.321532   13938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2 +20000M
	I0415 05:02:05.332440   13938 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:02:05.332457   13938 main.go:141] libmachine: STDERR: 
	I0415 05:02:05.332468   13938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:02:05.332472   13938 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:02:05.332519   13938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:07:54:0b:93:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:02:05.334156   13938 main.go:141] libmachine: STDOUT: 
	I0415 05:02:05.334174   13938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:02:05.334186   13938 client.go:171] duration metric: took 231.80025ms to LocalClient.Create
	I0415 05:02:07.336269   13938 start.go:128] duration metric: took 2.293893875s to createHost
	I0415 05:02:07.336327   13938 start.go:83] releasing machines lock for "multinode-671000", held for 2.294465125s
	W0415 05:02:07.336688   13938 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:02:07.348435   13938 out.go:177] 
	W0415 05:02:07.352422   13938 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:02:07.352447   13938 out.go:239] * 
	* 
	W0415 05:02:07.355069   13938 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:02:07.364182   13938 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-671000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (69.185042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.88s)
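
Note: the verbose trace above shows the full create/retry shape: create the host, fail with connection refused on the vmnet socket, delete the half-created machine, wait five seconds ("Will try again in 5 seconds ..."), fail once more, then exit 80. A schematic of that control flow (a sketch of the observed behavior, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for libmachine.API.Create; in this report it
	// always fails with connection refused on /var/run/socket_vmnet.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				// minikube exits 80 at this point, as seen throughout the report.
			}
		}
	}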

TestMultiNode/serial/DeployApp2Nodes (90.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.745333ms)

** stderr ** 
	error: cluster "multinode-671000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- rollout status deployment/busybox: exit status 1 (58.660792ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.105459ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.321542ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.479125ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.331417ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.868959ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.455167ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.318416ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.168ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.074875ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.419625ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.790708ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.33825ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.730333ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.237708ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (31.670791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (90.18s)
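Every kubectl call in this test fails with the same `no server found for cluster "multinode-671000"` error, which follows from the stopped host: `minikube kubectl -p <profile>` resolves the API server address from the profile's kubeconfig entry, and no server was ever recorded because the VM never started. A minimal diagnostic sketch (hypothetical, not part of the test suite) that makes this visible:

    # Print the server URL recorded for the profile's cluster entry; it is
    # empty or absent when the cluster never came up.
    kubectl config view -o jsonpath='{.clusters[?(@.name=="multinode-671000")].cluster.server}'

    # Cross-check against the host state minikube reports (exit status 7 = stopped).
    out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-671000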

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.089292ms)

** stderr ** 
	error: no server found for cluster "multinode-671000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (32.204958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)
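The `exec  --` commands in the previous test show two spaces where a pod name should be: the pod list query returned nothing, so an empty name was substituted. A minimal sketch of the intended flow, assuming the cluster is up and that `host.minikube.internal` resolves inside the pods:

    # Fetch pod names first; exec only runs when the list is non-empty.
    pods=$(out/minikube-darwin-arm64 kubectl -p multinode-671000 -- \
      get pods -o jsonpath='{.items[*].metadata.name}')
    for p in $pods; do
      out/minikube-darwin-arm64 kubectl -p multinode-671000 -- \
        exec "$p" -- ping -c 1 host.minikube.internal
    done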

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-671000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-671000 -v 3 --alsologtostderr: exit status 83 (44.805ms)

-- stdout --
	* The control-plane node multinode-671000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-671000"

-- /stdout --
** stderr ** 
	I0415 05:03:37.750124   14033 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:37.750299   14033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:37.750302   14033 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:37.750304   14033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:37.750434   14033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:37.750675   14033 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:37.750851   14033 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:37.755944   14033 out.go:177] * The control-plane node multinode-671000 host is not running: state=Stopped
	I0415 05:03:37.759741   14033 out.go:177]   To start a cluster, run: "minikube start -p multinode-671000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-671000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (31.414042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
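The node add aborts because the control-plane host is stopped; the stdout above carries minikube's own recovery hint. The corresponding manual sequence, using only commands that appear elsewhere in this report:

    # Bring the control plane up first, then retry the node add.
    out/minikube-darwin-arm64 start -p multinode-671000
    out/minikube-darwin-arm64 node add -p multinode-671000 -v 3 --alsologtostderr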

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-671000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-671000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.7975ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-671000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-671000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-671000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (32.658625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
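Unlike the earlier subtests, this one calls the system kubectl with `--context multinode-671000` directly, and fails because minikube never wrote that context into the kubeconfig. A hedged diagnostic sketch (the second command assumes the cluster is actually running):

    # Is the profile's context present at all?
    kubectl config get-contexts multinode-671000

    # Ask minikube to (re)write the context for the profile once it is up.
    out/minikube-darwin-arm64 update-context -p multinode-671000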

TestMultiNode/serial/ProfileList (0.11s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-671000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-671000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-671000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-671000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (31.921083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
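The assertion compares the number of entries in `Config.Nodes` against the 3 nodes the test expects, and the JSON dump above contains only a single node entry. One way to pull that count out of the same output, assuming `jq` is available:

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "multinode-671000") | .Config.Nodes | length'
    # Prints 1 for the profile dumped above; the test wants 3.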

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status --output json --alsologtostderr: exit status 7 (31.983667ms)

-- stdout --
	{"Name":"multinode-671000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0415 05:03:37.992946   14046 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:37.993086   14046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:37.993089   14046 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:37.993092   14046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:37.993218   14046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:37.993348   14046 out.go:298] Setting JSON to true
	I0415 05:03:37.993358   14046 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:37.993415   14046 notify.go:220] Checking for updates...
	I0415 05:03:37.993575   14046 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:37.993580   14046 status.go:255] checking status of multinode-671000 ...
	I0415 05:03:37.993798   14046 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:03:37.993802   14046 status.go:343] host is not running, skipping remaining checks
	I0415 05:03:37.993804   14046 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-671000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (31.842916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
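The decode error is a shape mismatch: `status --output json` printed a single object for the one-node profile, while the test unmarshals into a slice (`[]cmd.Status`). A shape-tolerant read of the same output, assuming `jq` is available:

    # Handle both a bare status object and an array of them.
    out/minikube-darwin-arm64 -p multinode-671000 status --output json \
      | jq -r 'if type == "array" then .[].Host else .Host end'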

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 node stop m03: exit status 85 (48.334333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-671000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status: exit status 7 (31.827375ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr: exit status 7 (31.673167ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:03:38.137451   14054 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:38.137601   14054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:38.137604   14054 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:38.137606   14054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:38.137739   14054 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:38.137849   14054 out.go:298] Setting JSON to false
	I0415 05:03:38.137859   14054 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:38.137919   14054 notify.go:220] Checking for updates...
	I0415 05:03:38.138076   14054 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:38.138082   14054 status.go:255] checking status of multinode-671000 ...
	I0415 05:03:38.138287   14054 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:03:38.138290   14054 status.go:343] host is not running, skipping remaining checks
	I0415 05:03:38.138293   14054 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr": multinode-671000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (32.048ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
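Exit status 85 (`GUEST_NODE_RETRIEVE`) means the named node does not exist in the profile; this single-node cluster has no m03. Enumerating nodes before addressing one by name makes the mismatch obvious (both commands appear elsewhere in this report):

    # Only the control-plane node is listed, so "m03" cannot be resolved.
    out/minikube-darwin-arm64 node list -p multinode-671000
    out/minikube-darwin-arm64 -p multinode-671000 node stop m03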

TestMultiNode/serial/StartAfterStop (56.54s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 node start m03 -v=7 --alsologtostderr: exit status 85 (50.986291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0415 05:03:38.201960   14058 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:38.202369   14058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:38.202372   14058 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:38.202374   14058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:38.202524   14058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:38.202754   14058 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:38.202959   14058 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:38.207666   14058 out.go:177] 
	W0415 05:03:38.211441   14058 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0415 05:03:38.211445   14058 out.go:239] * 
	* 
	W0415 05:03:38.213465   14058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:03:38.217633   14058 out.go:177] 

** /stderr **
multinode_test.go:284: I0415 05:03:38.201960   14058 out.go:291] Setting OutFile to fd 1 ...
I0415 05:03:38.202369   14058 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 05:03:38.202372   14058 out.go:304] Setting ErrFile to fd 2...
I0415 05:03:38.202374   14058 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 05:03:38.202524   14058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
I0415 05:03:38.202754   14058 mustload.go:65] Loading cluster: multinode-671000
I0415 05:03:38.202959   14058 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 05:03:38.207666   14058 out.go:177] 
W0415 05:03:38.211441   14058 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0415 05:03:38.211445   14058 out.go:239] * 
* 
W0415 05:03:38.213465   14058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0415 05:03:38.217633   14058 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-671000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (31.44025ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:03:38.252461   14060 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:38.252581   14060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:38.252584   14060 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:38.252590   14060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:38.252714   14060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:38.252835   14060 out.go:298] Setting JSON to false
	I0415 05:03:38.252846   14060 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:38.252898   14060 notify.go:220] Checking for updates...
	I0415 05:03:38.253051   14060 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:38.253057   14060 status.go:255] checking status of multinode-671000 ...
	I0415 05:03:38.253251   14060 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:03:38.253255   14060 status.go:343] host is not running, skipping remaining checks
	I0415 05:03:38.253257   14060 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (78.865334ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:03:39.688712   14062 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:39.688909   14062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:39.688914   14062 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:39.688917   14062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:39.689088   14062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:39.689281   14062 out.go:298] Setting JSON to false
	I0415 05:03:39.689295   14062 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:39.689343   14062 notify.go:220] Checking for updates...
	I0415 05:03:39.689566   14062 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:39.689574   14062 status.go:255] checking status of multinode-671000 ...
	I0415 05:03:39.689855   14062 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:03:39.689860   14062 status.go:343] host is not running, skipping remaining checks
	I0415 05:03:39.689863   14062 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (77.020041ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:03:41.445575   14064 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:41.445819   14064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:41.445823   14064 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:41.445826   14064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:41.445981   14064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:41.446165   14064 out.go:298] Setting JSON to false
	I0415 05:03:41.446178   14064 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:41.446219   14064 notify.go:220] Checking for updates...
	I0415 05:03:41.446469   14064 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:41.446476   14064 status.go:255] checking status of multinode-671000 ...
	I0415 05:03:41.446736   14064 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:03:41.446741   14064 status.go:343] host is not running, skipping remaining checks
	I0415 05:03:41.446744   14064 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (73.49175ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:03:42.820377   14066 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:42.820567   14066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:42.820571   14066 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:42.820574   14066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:42.820735   14066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:42.820922   14066 out.go:298] Setting JSON to false
	I0415 05:03:42.820935   14066 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:42.820977   14066 notify.go:220] Checking for updates...
	I0415 05:03:42.821168   14066 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:42.821175   14066 status.go:255] checking status of multinode-671000 ...
	I0415 05:03:42.821440   14066 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:03:42.821445   14066 status.go:343] host is not running, skipping remaining checks
	I0415 05:03:42.821450   14066 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (75.599667ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:03:47.523646   14069 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:47.523830   14069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:47.523834   14069 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:47.523837   14069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:47.523990   14069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:47.524139   14069 out.go:298] Setting JSON to false
	I0415 05:03:47.524152   14069 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:47.524182   14069 notify.go:220] Checking for updates...
	I0415 05:03:47.524374   14069 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:47.524381   14069 status.go:255] checking status of multinode-671000 ...
	I0415 05:03:47.524625   14069 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:03:47.524629   14069 status.go:343] host is not running, skipping remaining checks
	I0415 05:03:47.524632   14069 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (76.768833ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:03:52.731710   14071 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:03:52.731896   14071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:52.731900   14071 out.go:304] Setting ErrFile to fd 2...
	I0415 05:03:52.731903   14071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:03:52.732048   14071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:03:52.732208   14071 out.go:298] Setting JSON to false
	I0415 05:03:52.732221   14071 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:03:52.732265   14071 notify.go:220] Checking for updates...
	I0415 05:03:52.732503   14071 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:03:52.732511   14071 status.go:255] checking status of multinode-671000 ...
	I0415 05:03:52.732764   14071 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:03:52.732769   14071 status.go:343] host is not running, skipping remaining checks
	I0415 05:03:52.732771   14071 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (76.284125ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:04:03.303517   14073 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:04:03.303703   14073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:03.303708   14073 out.go:304] Setting ErrFile to fd 2...
	I0415 05:04:03.303711   14073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:03.303861   14073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:04:03.304001   14073 out.go:298] Setting JSON to false
	I0415 05:04:03.304013   14073 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:04:03.304054   14073 notify.go:220] Checking for updates...
	I0415 05:04:03.304259   14073 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:04:03.304266   14073 status.go:255] checking status of multinode-671000 ...
	I0415 05:04:03.304516   14073 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:04:03.304520   14073 status.go:343] host is not running, skipping remaining checks
	I0415 05:04:03.304523   14073 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (75.730375ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:04:14.913799   14080 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:04:14.913983   14080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:14.913988   14080 out.go:304] Setting ErrFile to fd 2...
	I0415 05:04:14.913991   14080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:14.914140   14080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:04:14.914291   14080 out.go:298] Setting JSON to false
	I0415 05:04:14.914304   14080 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:04:14.914349   14080 notify.go:220] Checking for updates...
	I0415 05:04:14.914553   14080 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:04:14.914560   14080 status.go:255] checking status of multinode-671000 ...
	I0415 05:04:14.914819   14080 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:04:14.914824   14080 status.go:343] host is not running, skipping remaining checks
	I0415 05:04:14.914827   14080 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr: exit status 7 (75.035291ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:04:34.672266   14082 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:04:34.672473   14082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:34.672478   14082 out.go:304] Setting ErrFile to fd 2...
	I0415 05:04:34.672481   14082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:34.672653   14082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:04:34.672813   14082 out.go:298] Setting JSON to false
	I0415 05:04:34.672826   14082 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:04:34.672861   14082 notify.go:220] Checking for updates...
	I0415 05:04:34.673089   14082 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:04:34.673096   14082 status.go:255] checking status of multinode-671000 ...
	I0415 05:04:34.673381   14082 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:04:34.673386   14082 status.go:343] host is not running, skipping remaining checks
	I0415 05:04:34.673389   14082 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-671000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (34.286958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (56.54s)
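The test retries `status` for almost a minute because exit status 7 (host stopped) "may be ok" while a node is still coming up. A minimal sketch of that polling loop as a shell one-off, with a bounded timeout:

    # Poll until the host reports Running, for at most ~60s.
    for i in $(seq 1 30); do
      state=$(out/minikube-darwin-arm64 status --format='{{.Host}}' \
        -p multinode-671000 -n multinode-671000) && [ "$state" = Running ] && break
      sleep 2
    done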

TestMultiNode/serial/RestartKeepsNodes (9.02s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-671000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-671000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-671000: (3.665164375s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-671000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-671000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.221097291s)

-- stdout --
	* [multinode-671000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	* Restarting existing qemu2 VM for "multinode-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:04:38.474555   14113 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:04:38.474724   14113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:38.474730   14113 out.go:304] Setting ErrFile to fd 2...
	I0415 05:04:38.474732   14113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:38.474914   14113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:04:38.476053   14113 out.go:298] Setting JSON to false
	I0415 05:04:38.494974   14113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7449,"bootTime":1713175229,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:04:38.495037   14113 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:04:38.500091   14113 out.go:177] * [multinode-671000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:04:38.504617   14113 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:04:38.504649   14113 notify.go:220] Checking for updates...
	I0415 05:04:38.508991   14113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:04:38.512012   14113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:04:38.513179   14113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:04:38.515975   14113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:04:38.519000   14113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:04:38.522394   14113 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:04:38.522453   14113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:04:38.526966   14113 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:04:38.533996   14113 start.go:297] selected driver: qemu2
	I0415 05:04:38.534003   14113 start.go:901] validating driver "qemu2" against &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:04:38.534064   14113 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:04:38.536261   14113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:04:38.536305   14113 cni.go:84] Creating CNI manager for ""
	I0415 05:04:38.536309   14113 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 05:04:38.536359   14113 start.go:340] cluster config:
	{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:04:38.540485   14113 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:04:38.547996   14113 out.go:177] * Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	I0415 05:04:38.551940   14113 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:04:38.551958   14113 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:04:38.551965   14113 cache.go:56] Caching tarball of preloaded images
	I0415 05:04:38.552030   14113 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:04:38.552036   14113 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:04:38.552106   14113 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/multinode-671000/config.json ...
	I0415 05:04:38.552570   14113 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:04:38.552605   14113 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "multinode-671000"
	I0415 05:04:38.552614   14113 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:04:38.552619   14113 fix.go:54] fixHost starting: 
	I0415 05:04:38.552730   14113 fix.go:112] recreateIfNeeded on multinode-671000: state=Stopped err=<nil>
	W0415 05:04:38.552738   14113 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:04:38.559908   14113 out.go:177] * Restarting existing qemu2 VM for "multinode-671000" ...
	I0415 05:04:38.564048   14113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:07:54:0b:93:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:04:38.566044   14113 main.go:141] libmachine: STDOUT: 
	I0415 05:04:38.566066   14113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:04:38.566092   14113 fix.go:56] duration metric: took 13.472625ms for fixHost
	I0415 05:04:38.566096   14113 start.go:83] releasing machines lock for "multinode-671000", held for 13.48675ms
	W0415 05:04:38.566102   14113 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:04:38.566132   14113 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:04:38.566136   14113 start.go:728] Will try again in 5 seconds ...
	I0415 05:04:43.568079   14113 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:04:43.568434   14113 start.go:364] duration metric: took 289.625µs to acquireMachinesLock for "multinode-671000"
	I0415 05:04:43.568543   14113 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:04:43.568565   14113 fix.go:54] fixHost starting: 
	I0415 05:04:43.569236   14113 fix.go:112] recreateIfNeeded on multinode-671000: state=Stopped err=<nil>
	W0415 05:04:43.569265   14113 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:04:43.577724   14113 out.go:177] * Restarting existing qemu2 VM for "multinode-671000" ...
	I0415 05:04:43.582767   14113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:07:54:0b:93:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:04:43.591886   14113 main.go:141] libmachine: STDOUT: 
	I0415 05:04:43.591999   14113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:04:43.592079   14113 fix.go:56] duration metric: took 23.517083ms for fixHost
	I0415 05:04:43.592096   14113 start.go:83] releasing machines lock for "multinode-671000", held for 23.643458ms
	W0415 05:04:43.592310   14113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:04:43.598191   14113 out.go:177] 
	W0415 05:04:43.602794   14113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:04:43.602817   14113 out.go:239] * 
	* 
	W0415 05:04:43.605431   14113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:04:43.612715   14113 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-671000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-671000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (34.346458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.02s)
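
Note: every restart attempt in this test dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A quick host-side check, sketched under the assumption of the Homebrew-style install paths shown in the log (adjust to your machine):

	# is anything serving the socket, and does the socket file exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if socket_vmnet was installed via Homebrew, restart the daemon
	sudo brew services restart socket_vmnet

Until the daemon is back, every qemu2 start below fails with the same "Connection refused".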

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 node delete m03: exit status 83 (43.924125ms)

-- stdout --
	* The control-plane node multinode-671000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-671000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-671000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr: exit status 7 (32.311958ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:04:43.807175   14127 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:04:43.807334   14127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:43.807337   14127 out.go:304] Setting ErrFile to fd 2...
	I0415 05:04:43.807339   14127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:43.807460   14127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:04:43.807582   14127 out.go:298] Setting JSON to false
	I0415 05:04:43.807592   14127 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:04:43.807648   14127 notify.go:220] Checking for updates...
	I0415 05:04:43.807795   14127 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:04:43.807801   14127 status.go:255] checking status of multinode-671000 ...
	I0415 05:04:43.808009   14127 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:04:43.808012   14127 status.go:343] host is not running, skipping remaining checks
	I0415 05:04:43.808015   14127 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (31.7455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
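
Note: the two exit codes above are consistent with each other: "node delete" exits 83 because the control-plane host is Stopped, and "minikube status" exits 7 for a stopped host (which helpers_test treats as tolerable). A hedged sketch of how a wrapper script could gate node operations on that status code:

	# run status quietly; exit code 7 = host stopped (per the output above)
	out/minikube-darwin-arm64 status -p multinode-671000 >/dev/null 2>&1
	if [ $? -eq 7 ]; then
		echo "multinode-671000 is stopped; start it before node delete"
	fi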

TestMultiNode/serial/StopMultiNode (3.36s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-671000 stop: (3.219892875s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status: exit status 7 (72.861792ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr: exit status 7 (34.102ms)

-- stdout --
	multinode-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0415 05:04:47.166344   14151 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:04:47.166482   14151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:47.166485   14151 out.go:304] Setting ErrFile to fd 2...
	I0415 05:04:47.166487   14151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:47.166602   14151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:04:47.166743   14151 out.go:298] Setting JSON to false
	I0415 05:04:47.166753   14151 mustload.go:65] Loading cluster: multinode-671000
	I0415 05:04:47.166813   14151 notify.go:220] Checking for updates...
	I0415 05:04:47.166961   14151 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:04:47.166967   14151 status.go:255] checking status of multinode-671000 ...
	I0415 05:04:47.167161   14151 status.go:330] multinode-671000 host status = "Stopped" (err=<nil>)
	I0415 05:04:47.167168   14151 status.go:343] host is not running, skipping remaining checks
	I0415 05:04:47.167171   14151 status.go:257] multinode-671000 status: &{Name:multinode-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr": multinode-671000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-671000 status --alsologtostderr": multinode-671000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (31.900583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.36s)
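
Note: the stop itself succeeded (3.2s, exit 0); what fails is the follow-up assertion, which apparently expects one "host: Stopped" / "kubelet: Stopped" block per node and finds only the control plane, since the worker nodes were never created in this run. The same count can be reproduced by hand (a sketch; grep -c counts matching lines):

	# a healthy two-node stop would print 2; this run prints 1
	out/minikube-darwin-arm64 -p multinode-671000 status | grep -c "host: Stopped"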

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-671000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-671000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.186975084s)

-- stdout --
	* [multinode-671000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	* Restarting existing qemu2 VM for "multinode-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:04:47.230291   14155 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:04:47.230413   14155 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:47.230419   14155 out.go:304] Setting ErrFile to fd 2...
	I0415 05:04:47.230422   14155 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:04:47.230550   14155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:04:47.231543   14155 out.go:298] Setting JSON to false
	I0415 05:04:47.247702   14155 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7458,"bootTime":1713175229,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:04:47.247766   14155 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:04:47.252967   14155 out.go:177] * [multinode-671000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:04:47.259839   14155 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:04:47.259888   14155 notify.go:220] Checking for updates...
	I0415 05:04:47.266735   14155 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:04:47.269860   14155 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:04:47.272842   14155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:04:47.275840   14155 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:04:47.278794   14155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:04:47.282142   14155 config.go:182] Loaded profile config "multinode-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:04:47.282398   14155 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:04:47.285727   14155 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:04:47.292825   14155 start.go:297] selected driver: qemu2
	I0415 05:04:47.292833   14155 start.go:901] validating driver "qemu2" against &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:04:47.292908   14155 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:04:47.295230   14155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:04:47.295279   14155 cni.go:84] Creating CNI manager for ""
	I0415 05:04:47.295284   14155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 05:04:47.295329   14155 start.go:340] cluster config:
	{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:04:47.299704   14155 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:04:47.306796   14155 out.go:177] * Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	I0415 05:04:47.310807   14155 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:04:47.310824   14155 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:04:47.310830   14155 cache.go:56] Caching tarball of preloaded images
	I0415 05:04:47.310894   14155 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:04:47.310900   14155 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:04:47.310954   14155 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/multinode-671000/config.json ...
	I0415 05:04:47.311448   14155 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:04:47.311481   14155 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "multinode-671000"
	I0415 05:04:47.311490   14155 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:04:47.311496   14155 fix.go:54] fixHost starting: 
	I0415 05:04:47.311619   14155 fix.go:112] recreateIfNeeded on multinode-671000: state=Stopped err=<nil>
	W0415 05:04:47.311628   14155 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:04:47.315759   14155 out.go:177] * Restarting existing qemu2 VM for "multinode-671000" ...
	I0415 05:04:47.323816   14155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:07:54:0b:93:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:04:47.325948   14155 main.go:141] libmachine: STDOUT: 
	I0415 05:04:47.325970   14155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:04:47.326001   14155 fix.go:56] duration metric: took 14.505167ms for fixHost
	I0415 05:04:47.326005   14155 start.go:83] releasing machines lock for "multinode-671000", held for 14.520208ms
	W0415 05:04:47.326012   14155 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:04:47.326042   14155 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:04:47.326047   14155 start.go:728] Will try again in 5 seconds ...
	I0415 05:04:52.328091   14155 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:04:52.328522   14155 start.go:364] duration metric: took 310.167µs to acquireMachinesLock for "multinode-671000"
	I0415 05:04:52.328654   14155 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:04:52.328675   14155 fix.go:54] fixHost starting: 
	I0415 05:04:52.329433   14155 fix.go:112] recreateIfNeeded on multinode-671000: state=Stopped err=<nil>
	W0415 05:04:52.329465   14155 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:04:52.337871   14155 out.go:177] * Restarting existing qemu2 VM for "multinode-671000" ...
	I0415 05:04:52.342117   14155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:07:54:0b:93:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/multinode-671000/disk.qcow2
	I0415 05:04:52.351581   14155 main.go:141] libmachine: STDOUT: 
	I0415 05:04:52.351642   14155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:04:52.351716   14155 fix.go:56] duration metric: took 23.04475ms for fixHost
	I0415 05:04:52.351729   14155 start.go:83] releasing machines lock for "multinode-671000", held for 23.18525ms
	W0415 05:04:52.351869   14155 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:04:52.358889   14155 out.go:177] 
	W0415 05:04:52.362948   14155 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:04:52.362977   14155 out.go:239] * 
	* 
	W0415 05:04:52.365624   14155 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:04:52.372861   14155 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-671000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (70.585583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
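
Note: the failing step can be reproduced without minikube by driving the client the same way libmachine does in the log above. With the daemon down, this should print the same "Connection refused" (a sketch assuming the install path from the log; "true" is just a harmless stand-in for the child command):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true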

TestMultiNode/serial/ValidateNameConflict (20.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-671000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-671000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-671000-m01 --driver=qemu2 : exit status 80 (9.841541875s)

-- stdout --
	* [multinode-671000-m01] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-671000-m01" primary control-plane node in "multinode-671000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-671000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-671000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-671000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-671000-m02 --driver=qemu2 : exit status 80 (10.029923292s)

-- stdout --
	* [multinode-671000-m02] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-671000-m02" primary control-plane node in "multinode-671000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-671000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-671000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-671000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-671000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-671000: exit status 83 (82.682625ms)

-- stdout --
	* The control-plane node multinode-671000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-671000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-671000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-671000 -n multinode-671000: exit status 7 (33.007334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.12s)
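
Note: the failure path leaves stray profiles behind ("multinode-671000-m01" is never deleted above; only "-m02" is). Before re-running the suite it may help to clean them up (a sketch using the same binary the tests use):

	# list leftover profiles, then delete the stray conflict profile by name
	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 delete -p multinode-671000-m01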

TestPreload (9.97s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.797061292s)

-- stdout --
	* [test-preload-112000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-112000" primary control-plane node in "test-preload-112000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-112000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:05:12.749225   14216 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:05:12.749358   14216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:05:12.749361   14216 out.go:304] Setting ErrFile to fd 2...
	I0415 05:05:12.749363   14216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:05:12.749477   14216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:05:12.750507   14216 out.go:298] Setting JSON to false
	I0415 05:05:12.766531   14216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7483,"bootTime":1713175229,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:05:12.766590   14216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:05:12.771993   14216 out.go:177] * [test-preload-112000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:05:12.778756   14216 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:05:12.782888   14216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:05:12.778827   14216 notify.go:220] Checking for updates...
	I0415 05:05:12.787322   14216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:05:12.789912   14216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:05:12.792948   14216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:05:12.795931   14216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:05:12.799198   14216 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:05:12.799255   14216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:05:12.803904   14216 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:05:12.810876   14216 start.go:297] selected driver: qemu2
	I0415 05:05:12.810881   14216 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:05:12.810886   14216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:05:12.813157   14216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:05:12.816982   14216 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:05:12.820042   14216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:05:12.820086   14216 cni.go:84] Creating CNI manager for ""
	I0415 05:05:12.820093   14216 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:05:12.820097   14216 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:05:12.820130   14216 start.go:340] cluster config:
	{Name:test-preload-112000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-112000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:05:12.824524   14216 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.831909   14216 out.go:177] * Starting "test-preload-112000" primary control-plane node in "test-preload-112000" cluster
	I0415 05:05:12.834914   14216 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0415 05:05:12.834989   14216 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/test-preload-112000/config.json ...
	I0415 05:05:12.835016   14216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/test-preload-112000/config.json: {Name:mk24c69c2c5f10534d20a6462a8a3a7c646a456a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:05:12.835014   14216 cache.go:107] acquiring lock: {Name:mkbfdc158fe0fc75d1a0ae2103c24e756d4a4c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.835020   14216 cache.go:107] acquiring lock: {Name:mk061a0aa2e489d828dc159a60bb7d6551edbcf4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.835029   14216 cache.go:107] acquiring lock: {Name:mk5c74b0ce6049f8d3bd8bc61122954af475ea46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.835056   14216 cache.go:107] acquiring lock: {Name:mk87519e8daad80783febf5ba4f961f1c158953e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.835244   14216 cache.go:107] acquiring lock: {Name:mkc10596c869d3cb028c14c407d5431adfe9da61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.835268   14216 cache.go:107] acquiring lock: {Name:mk00614194343bc3be38f01a8e036273885c329b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.835278   14216 cache.go:107] acquiring lock: {Name:mk5c409b3e4094b3e93bdd50e0b7381646f3b394 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.835386   14216 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:05:12.835376   14216 cache.go:107] acquiring lock: {Name:mk552e91f599f30dff26bdcab8f4c6b1d04cd6fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:05:12.835413   14216 start.go:360] acquireMachinesLock for test-preload-112000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:05:12.835440   14216 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0415 05:05:12.835454   14216 start.go:364] duration metric: took 34.708µs to acquireMachinesLock for "test-preload-112000"
	I0415 05:05:12.835465   14216 start.go:93] Provisioning new machine with config: &{Name:test-preload-112000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-112000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:05:12.835496   14216 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:05:12.835509   14216 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:05:12.839934   14216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:05:12.835549   14216 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0415 05:05:12.835572   14216 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0415 05:05:12.835560   14216 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0415 05:05:12.835609   14216 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:05:12.835620   14216 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0415 05:05:12.848819   14216 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:05:12.849979   14216 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0415 05:05:12.850542   14216 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:05:12.853863   14216 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0415 05:05:12.853872   14216 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0415 05:05:12.853984   14216 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:05:12.854007   14216 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0415 05:05:12.854045   14216 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0415 05:05:12.856494   14216 start.go:159] libmachine.API.Create for "test-preload-112000" (driver="qemu2")
	I0415 05:05:12.856510   14216 client.go:168] LocalClient.Create starting
	I0415 05:05:12.856577   14216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:05:12.856607   14216 main.go:141] libmachine: Decoding PEM data...
	I0415 05:05:12.856615   14216 main.go:141] libmachine: Parsing certificate...
	I0415 05:05:12.856659   14216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:05:12.856681   14216 main.go:141] libmachine: Decoding PEM data...
	I0415 05:05:12.856688   14216 main.go:141] libmachine: Parsing certificate...
	I0415 05:05:12.857051   14216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:05:12.988830   14216 main.go:141] libmachine: Creating SSH key...
	I0415 05:05:13.114188   14216 main.go:141] libmachine: Creating Disk image...
	I0415 05:05:13.114216   14216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:05:13.114415   14216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2
	I0415 05:05:13.127218   14216 main.go:141] libmachine: STDOUT: 
	I0415 05:05:13.127241   14216 main.go:141] libmachine: STDERR: 
	I0415 05:05:13.127318   14216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2 +20000M
	I0415 05:05:13.139648   14216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:05:13.139673   14216 main.go:141] libmachine: STDERR: 
	I0415 05:05:13.139703   14216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2
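
The two qemu-img calls above are the entire disk-preparation step. A minimal Go sketch of the same sequence via os/exec, assuming qemu-img is on PATH; the paths are placeholders, not the CI paths:

// Sketch (not minikube source): raw -> qcow2 convert, then grow the virtual size.
package main

import (
	"log"
	"os/exec"
)

func main() {
	raw := "/tmp/demo/disk.qcow2.raw" // placeholder: pre-built raw image
	qcow := "/tmp/demo/disk.qcow2"    // placeholder: qcow2 output

	// qemu-img convert -f raw -O qcow2 <raw> <qcow>
	if out, err := exec.Command("qemu-img",
		"convert", "-f", "raw", "-O", "qcow2", raw, qcow).CombinedOutput(); err != nil {
		log.Fatalf("convert: %v\n%s", err, out)
	}
	// qemu-img resize <qcow> +20000M grows the virtual disk size;
	// the qcow2 file stays sparse until the guest writes to it.
	if out, err := exec.Command("qemu-img",
		"resize", qcow, "+20000M").CombinedOutput(); err != nil {
		log.Fatalf("resize: %v\n%s", err, out)
	}
}
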
	I0415 05:05:13.139710   14216 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:05:13.139758   14216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:6e:6e:b2:cf:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2
	I0415 05:05:13.141895   14216 main.go:141] libmachine: STDOUT: 
	I0415 05:05:13.141958   14216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:05:13.141976   14216 client.go:171] duration metric: took 285.475542ms to LocalClient.Create
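
That STDERR line is the root cause threading through most failures in this report: nothing is listening on /var/run/socket_vmnet. A minimal Go sketch that reproduces the check by dialing the unix socket directly (the path is taken from the log; whether a daemon answers is host-specific):

// Sketch: probe the socket_vmnet control socket. "connection refused" here
// means no daemon holds the socket open, typically because socket_vmnet
// was never started on the host.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // the failure mode in this log
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
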
	W0415 05:05:13.306543   14216 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0415 05:05:13.306585   14216 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0415 05:05:13.306546   14216 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0415 05:05:13.312041   14216 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0415 05:05:13.324049   14216 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0415 05:05:13.343593   14216 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0415 05:05:13.362925   14216 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0415 05:05:13.365353   14216 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0415 05:05:13.405870   14216 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0415 05:05:13.405908   14216 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0415 05:05:13.511555   14216 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0415 05:05:13.511608   14216 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 676.406125ms
	I0415 05:05:13.511645   14216 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0415 05:05:13.631286   14216 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0415 05:05:13.631348   14216 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 796.348083ms
	I0415 05:05:13.631408   14216 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0415 05:05:15.142131   14216 start.go:128] duration metric: took 2.306707375s to createHost
	I0415 05:05:15.142181   14216 start.go:83] releasing machines lock for "test-preload-112000", held for 2.306813709s
	W0415 05:05:15.142289   14216 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:05:15.153852   14216 out.go:177] * Deleting "test-preload-112000" in qemu2 ...
	W0415 05:05:15.177112   14216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:05:15.177184   14216 start.go:728] Will try again in 5 seconds ...
	I0415 05:05:15.226440   14216 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0415 05:05:15.226495   14216 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.391533084s
	I0415 05:05:15.226530   14216 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0415 05:05:15.348484   14216 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0415 05:05:15.348523   14216 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.513355417s
	I0415 05:05:15.348549   14216 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0415 05:05:16.850251   14216 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0415 05:05:16.850318   14216 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.015457792s
	I0415 05:05:16.850353   14216 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0415 05:05:17.529317   14216 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0415 05:05:17.529362   14216 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.69454325s
	I0415 05:05:17.529387   14216 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0415 05:05:19.696000   14216 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0415 05:05:19.696054   14216 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.861048875s
	I0415 05:05:19.696081   14216 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0415 05:05:20.177408   14216 start.go:360] acquireMachinesLock for test-preload-112000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:05:20.177777   14216 start.go:364] duration metric: took 302.416µs to acquireMachinesLock for "test-preload-112000"
	I0415 05:05:20.177896   14216 start.go:93] Provisioning new machine with config: &{Name:test-preload-112000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-112000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:05:20.178151   14216 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:05:20.188777   14216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:05:20.239012   14216 start.go:159] libmachine.API.Create for "test-preload-112000" (driver="qemu2")
	I0415 05:05:20.239214   14216 client.go:168] LocalClient.Create starting
	I0415 05:05:20.239325   14216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:05:20.239381   14216 main.go:141] libmachine: Decoding PEM data...
	I0415 05:05:20.239396   14216 main.go:141] libmachine: Parsing certificate...
	I0415 05:05:20.239462   14216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:05:20.239505   14216 main.go:141] libmachine: Decoding PEM data...
	I0415 05:05:20.239516   14216 main.go:141] libmachine: Parsing certificate...
	I0415 05:05:20.240051   14216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:05:20.246559   14216 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0415 05:05:20.246592   14216 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 7.411661625s
	I0415 05:05:20.246607   14216 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0415 05:05:20.246639   14216 cache.go:87] Successfully saved all images to host disk.
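
While the create/retry loop runs, parallel goroutines fill the per-image cache, pulling each image explicitly for linux/arm64 to repair the two amd64 mismatches warned about earlier. An illustrative sketch of that idea with go-containerregistry (a library minikube also builds on), not minikube's actual cache code; the image and output path are placeholders:

// Sketch: pull an image pinned to linux/arm64 and persist it as a tarball,
// analogous to the per-image cache files in the log above.
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	ref, err := name.ParseReference("registry.k8s.io/pause:3.7")
	if err != nil {
		log.Fatal(err)
	}
	// Requesting the platform explicitly keeps a multi-arch index from
	// resolving to amd64 ("want arm64 got amd64").
	img, err := remote.Image(ref,
		remote.WithPlatform(v1.Platform{OS: "linux", Architecture: "arm64"}))
	if err != nil {
		log.Fatal(err)
	}
	if err := tarball.WriteToFile("/tmp/pause_3.7.tar", ref, img); err != nil {
		log.Fatal(err)
	}
}
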
	I0415 05:05:20.372164   14216 main.go:141] libmachine: Creating SSH key...
	I0415 05:05:20.443872   14216 main.go:141] libmachine: Creating Disk image...
	I0415 05:05:20.443881   14216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:05:20.444068   14216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2
	I0415 05:05:20.456939   14216 main.go:141] libmachine: STDOUT: 
	I0415 05:05:20.456968   14216 main.go:141] libmachine: STDERR: 
	I0415 05:05:20.457032   14216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2 +20000M
	I0415 05:05:20.468207   14216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:05:20.468228   14216 main.go:141] libmachine: STDERR: 
	I0415 05:05:20.468247   14216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2
	I0415 05:05:20.468250   14216 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:05:20.468281   14216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:82:b3:5f:f9:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/test-preload-112000/disk.qcow2
	I0415 05:05:20.470093   14216 main.go:141] libmachine: STDOUT: 
	I0415 05:05:20.470123   14216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:05:20.470143   14216 client.go:171] duration metric: took 230.933875ms to LocalClient.Create
	I0415 05:05:22.472367   14216 start.go:128] duration metric: took 2.294238791s to createHost
	I0415 05:05:22.472500   14216 start.go:83] releasing machines lock for "test-preload-112000", held for 2.294791125s
	W0415 05:05:22.472966   14216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-112000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-112000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:05:22.485724   14216 out.go:177] 
	W0415 05:05:22.487339   14216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:05:22.487367   14216 out.go:239] * 
	* 
	W0415 05:05:22.490303   14216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:05:22.500570   14216 out.go:177] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-04-15 05:05:22.519491 -0700 PDT m=+725.494897042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-112000 -n test-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-112000 -n test-preload-112000: exit status 7 (72.293333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-112000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-112000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-112000
--- FAIL: TestPreload (9.97s)
TestScheduledStopUnix (10.15s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-079000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-079000 --memory=2048 --driver=qemu2 : exit status 80 (9.974886083s)
-- stdout --
	* [scheduled-stop-079000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-079000" primary control-plane node in "scheduled-stop-079000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-079000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-079000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-079000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-079000" primary control-plane node in "scheduled-stop-079000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-079000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-079000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-15 05:05:32.667307 -0700 PDT m=+735.643135167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-079000 -n scheduled-stop-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-079000 -n scheduled-stop-079000: exit status 7 (69.467ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-079000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-079000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-079000
--- FAIL: TestScheduledStopUnix (10.15s)
TestSkaffold (12.32s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3489141569 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-076000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-076000 --memory=2600 --driver=qemu2 : exit status 80 (9.720269791s)
-- stdout --
	* [skaffold-076000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-076000" primary control-plane node in "skaffold-076000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-076000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-076000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-076000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-076000" primary control-plane node in "skaffold-076000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-076000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-076000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-04-15 05:05:44.990309 -0700 PDT m=+747.966650126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-076000 -n skaffold-076000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-076000 -n skaffold-076000: exit status 7 (64.992875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-076000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-076000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-076000
--- FAIL: TestSkaffold (12.32s)
TestRunningBinaryUpgrade (595.66s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1889134001 start -p running-upgrade-921000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1889134001 start -p running-upgrade-921000 --memory=2200 --vm-driver=qemu2 : (51.145570292s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-921000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-921000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m30.872512917s)
-- stdout --
	* [running-upgrade-921000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-921000" primary control-plane node in "running-upgrade-921000" cluster
	* Updating the running qemu2 "running-upgrade-921000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0415 05:07:37.514997   14706 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:07:37.515138   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:07:37.515141   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:07:37.515144   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:07:37.515259   14706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:07:37.516163   14706 out.go:298] Setting JSON to false
	I0415 05:07:37.533865   14706 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7628,"bootTime":1713175229,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:07:37.533945   14706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:07:37.539070   14706 out.go:177] * [running-upgrade-921000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:07:37.547180   14706 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:07:37.547239   14706 notify.go:220] Checking for updates...
	I0415 05:07:37.552104   14706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:07:37.555070   14706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:07:37.559031   14706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:07:37.562054   14706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:07:37.565123   14706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:07:37.568268   14706 config.go:182] Loaded profile config "running-upgrade-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:07:37.571055   14706 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0415 05:07:37.574066   14706 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:07:37.578022   14706 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:07:37.585064   14706 start.go:297] selected driver: qemu2
	I0415 05:07:37.585071   14706 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52246 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:07:37.585123   14706 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:07:37.587869   14706 cni.go:84] Creating CNI manager for ""
	I0415 05:07:37.587885   14706 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:07:37.587906   14706 start.go:340] cluster config:
	{Name:running-upgrade-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52246 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:07:37.587964   14706 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:07:37.596115   14706 out.go:177] * Starting "running-upgrade-921000" primary control-plane node in "running-upgrade-921000" cluster
	I0415 05:07:37.600052   14706 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0415 05:07:37.600069   14706 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0415 05:07:37.600078   14706 cache.go:56] Caching tarball of preloaded images
	I0415 05:07:37.600139   14706 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:07:37.600144   14706 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0415 05:07:37.600204   14706 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/config.json ...
	I0415 05:07:37.600641   14706 start.go:360] acquireMachinesLock for running-upgrade-921000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:07:37.600673   14706 start.go:364] duration metric: took 27µs to acquireMachinesLock for "running-upgrade-921000"
	I0415 05:07:37.600681   14706 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:07:37.600686   14706 fix.go:54] fixHost starting: 
	I0415 05:07:37.601318   14706 fix.go:112] recreateIfNeeded on running-upgrade-921000: state=Running err=<nil>
	W0415 05:07:37.601326   14706 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:07:37.606088   14706 out.go:177] * Updating the running qemu2 "running-upgrade-921000" VM ...
	I0415 05:07:37.612987   14706 machine.go:94] provisionDockerMachine start ...
	I0415 05:07:37.613018   14706 main.go:141] libmachine: Using SSH client type: native
	I0415 05:07:37.613144   14706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e75c80] 0x100e784e0 <nil>  [] 0s} localhost 52214 <nil> <nil>}
	I0415 05:07:37.613149   14706 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 05:07:37.686951   14706 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-921000
	
	I0415 05:07:37.686967   14706 buildroot.go:166] provisioning hostname "running-upgrade-921000"
	I0415 05:07:37.687010   14706 main.go:141] libmachine: Using SSH client type: native
	I0415 05:07:37.687122   14706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e75c80] 0x100e784e0 <nil>  [] 0s} localhost 52214 <nil> <nil>}
	I0415 05:07:37.687130   14706 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-921000 && echo "running-upgrade-921000" | sudo tee /etc/hostname
	I0415 05:07:37.764375   14706 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-921000
	
	I0415 05:07:37.764424   14706 main.go:141] libmachine: Using SSH client type: native
	I0415 05:07:37.764522   14706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e75c80] 0x100e784e0 <nil>  [] 0s} localhost 52214 <nil> <nil>}
	I0415 05:07:37.764530   14706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-921000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-921000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-921000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 05:07:37.837269   14706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
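
provisionDockerMachine drives every step above over SSH to the VM's forwarded localhost port (52214 in this run). A self-contained sketch of that pattern with golang.org/x/crypto/ssh, assuming a reachable guest and a placeholder key path; host-key verification is skipped, which is only tolerable for a throwaway test VM:

// Sketch: run one command in the guest over SSH, as the provisioning log does.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; do not do this in production
	}
	client, err := ssh.Dial("tcp", "localhost:52214", cfg) // forwarded port from the log
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("guest hostname: %s", out)
}
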
	I0415 05:07:37.837282   14706 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18644-12307/.minikube CaCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18644-12307/.minikube}
	I0415 05:07:37.837294   14706 buildroot.go:174] setting up certificates
	I0415 05:07:37.837299   14706 provision.go:84] configureAuth start
	I0415 05:07:37.837303   14706 provision.go:143] copyHostCerts
	I0415 05:07:37.837403   14706 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem, removing ...
	I0415 05:07:37.837408   14706 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem
	I0415 05:07:37.837528   14706 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem (1078 bytes)
	I0415 05:07:37.837694   14706 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem, removing ...
	I0415 05:07:37.837697   14706 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem
	I0415 05:07:37.837755   14706 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem (1123 bytes)
	I0415 05:07:37.837860   14706 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem, removing ...
	I0415 05:07:37.837863   14706 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem
	I0415 05:07:37.837912   14706 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem (1675 bytes)
	I0415 05:07:37.838009   14706 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-921000 san=[127.0.0.1 localhost minikube running-upgrade-921000]
	I0415 05:07:37.897186   14706 provision.go:177] copyRemoteCerts
	I0415 05:07:37.897231   14706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 05:07:37.897239   14706 sshutil.go:53] new ssh client: &{IP:localhost Port:52214 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/running-upgrade-921000/id_rsa Username:docker}
	I0415 05:07:37.935774   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0415 05:07:37.942911   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0415 05:07:37.950211   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 05:07:37.957061   14706 provision.go:87] duration metric: took 119.762ms to configureAuth
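Note: configureAuth above regenerates the Docker TLS material, ending with provision.go:117's server cert whose SANs are [127.0.0.1 localhost minikube running-upgrade-921000]. A rough sketch of producing a cert with that SAN split; it is self-signed here for brevity, whereas the real flow signs with the minikube CA key and uses a random serial:

    // certsan_sketch.go - emit a cert carrying the SAN list from the log.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), // real code uses a random serial
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-921000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list splits into IP and DNS entries.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "running-upgrade-921000"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }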
	I0415 05:07:37.957071   14706 buildroot.go:189] setting minikube options for container-runtime
	I0415 05:07:37.957198   14706 config.go:182] Loaded profile config "running-upgrade-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:07:37.957230   14706 main.go:141] libmachine: Using SSH client type: native
	I0415 05:07:37.957329   14706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e75c80] 0x100e784e0 <nil>  [] 0s} localhost 52214 <nil> <nil>}
	I0415 05:07:37.957334   14706 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 05:07:38.032918   14706 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 05:07:38.032929   14706 buildroot.go:70] root file system type: tmpfs
	I0415 05:07:38.032981   14706 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 05:07:38.033038   14706 main.go:141] libmachine: Using SSH client type: native
	I0415 05:07:38.033174   14706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e75c80] 0x100e784e0 <nil>  [] 0s} localhost 52214 <nil> <nil>}
	I0415 05:07:38.033209   14706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 05:07:38.112392   14706 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 05:07:38.112447   14706 main.go:141] libmachine: Using SSH client type: native
	I0415 05:07:38.112566   14706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e75c80] 0x100e784e0 <nil>  [] 0s} localhost 52214 <nil> <nil>}
	I0415 05:07:38.112575   14706 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 05:07:38.189968   14706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 05:07:38.189984   14706 machine.go:97] duration metric: took 577.014875ms to provisionDockerMachine
	I0415 05:07:38.189990   14706 start.go:293] postStartSetup for "running-upgrade-921000" (driver="qemu2")
	I0415 05:07:38.189996   14706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 05:07:38.190049   14706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 05:07:38.190058   14706 sshutil.go:53] new ssh client: &{IP:localhost Port:52214 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/running-upgrade-921000/id_rsa Username:docker}
	I0415 05:07:38.231651   14706 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 05:07:38.232960   14706 info.go:137] Remote host: Buildroot 2021.02.12
	I0415 05:07:38.232967   14706 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18644-12307/.minikube/addons for local assets ...
	I0415 05:07:38.233038   14706 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18644-12307/.minikube/files for local assets ...
	I0415 05:07:38.233148   14706 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem -> 127152.pem in /etc/ssl/certs
	I0415 05:07:38.233274   14706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 05:07:38.236149   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem --> /etc/ssl/certs/127152.pem (1708 bytes)
	I0415 05:07:38.242722   14706 start.go:296] duration metric: took 52.730375ms for postStartSetup
	I0415 05:07:38.242739   14706 fix.go:56] duration metric: took 642.079958ms for fixHost
	I0415 05:07:38.242769   14706 main.go:141] libmachine: Using SSH client type: native
	I0415 05:07:38.242864   14706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e75c80] 0x100e784e0 <nil>  [] 0s} localhost 52214 <nil> <nil>}
	I0415 05:07:38.242868   14706 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 05:07:38.318012   14706 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713182858.410451097
	
	I0415 05:07:38.318025   14706 fix.go:216] guest clock: 1713182858.410451097
	I0415 05:07:38.318029   14706 fix.go:229] Guest: 2024-04-15 05:07:38.410451097 -0700 PDT Remote: 2024-04-15 05:07:38.242741 -0700 PDT m=+0.750259626 (delta=167.710097ms)
	I0415 05:07:38.318041   14706 fix.go:200] guest clock delta is within tolerance: 167.710097ms
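Note: fix.go compares the guest's "date +%s.%N" output against host wall-clock time and accepts the 167.710097ms skew seen here. A sketch of that tolerance check (the one-second tolerance below is an assumption for illustration, not minikube's exact threshold):

    // clockdelta_sketch.go - guest/host clock skew check.
    package main

    import (
        "fmt"
        "time"
    )

    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta // compare absolute skew
        }
        return delta <= tol
    }

    func main() {
        host := time.Now()
        guest := host.Add(167710097 * time.Nanosecond) // the 167.710097ms delta from the log
        fmt.Println("ok:", withinTolerance(guest, host, time.Second))
    }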
	I0415 05:07:38.318044   14706 start.go:83] releasing machines lock for "running-upgrade-921000", held for 717.396042ms
	I0415 05:07:38.318115   14706 ssh_runner.go:195] Run: cat /version.json
	I0415 05:07:38.318128   14706 sshutil.go:53] new ssh client: &{IP:localhost Port:52214 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/running-upgrade-921000/id_rsa Username:docker}
	I0415 05:07:38.318116   14706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 05:07:38.318153   14706 sshutil.go:53] new ssh client: &{IP:localhost Port:52214 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/running-upgrade-921000/id_rsa Username:docker}
	W0415 05:07:38.318698   14706 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52214: connect: connection refused
	I0415 05:07:38.318717   14706 retry.go:31] will retry after 342.3681ms: dial tcp [::1]:52214: connect: connection refused
	W0415 05:07:38.355214   14706 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0415 05:07:38.355257   14706 ssh_runner.go:195] Run: systemctl --version
	I0415 05:07:38.357137   14706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 05:07:38.358884   14706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 05:07:38.358910   14706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0415 05:07:38.361673   14706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0415 05:07:38.366335   14706 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 05:07:38.366340   14706 start.go:494] detecting cgroup driver to use...
	I0415 05:07:38.366450   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 05:07:38.371714   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0415 05:07:38.374695   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 05:07:38.377629   14706 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 05:07:38.377657   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 05:07:38.381276   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 05:07:38.384051   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 05:07:38.386733   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 05:07:38.389990   14706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 05:07:38.393334   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 05:07:38.396553   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 05:07:38.399391   14706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 05:07:38.402311   14706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 05:07:38.405274   14706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 05:07:38.407764   14706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:07:38.498589   14706 ssh_runner.go:195] Run: sudo systemctl restart containerd
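Note: the run of sed commands above rewrites /etc/containerd/config.toml so containerd matches the detected cgroupfs driver: SystemdCgroup = false, the runc v2 shim, the CNI conf_dir, and enable_unprivileged_ports. The key edit, done in Go on the same file for illustration:

    // cgroupfs_sketch.go - the SystemdCgroup flip from the sed pipeline above.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Preserve the original indentation, replace the value.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }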
	I0415 05:07:38.505267   14706 start.go:494] detecting cgroup driver to use...
	I0415 05:07:38.505329   14706 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 05:07:38.513240   14706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 05:07:38.518174   14706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 05:07:38.529010   14706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 05:07:38.533641   14706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 05:07:38.538420   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 05:07:38.543951   14706 ssh_runner.go:195] Run: which cri-dockerd
	I0415 05:07:38.545337   14706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 05:07:38.547864   14706 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 05:07:38.552604   14706 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 05:07:38.645058   14706 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 05:07:38.738021   14706 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 05:07:38.738081   14706 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 05:07:38.743638   14706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:07:38.827543   14706 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 05:07:43.188485   14706 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.3611085s)
	I0415 05:07:43.188742   14706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 05:07:43.193555   14706 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0415 05:07:43.199895   14706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 05:07:43.204356   14706 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 05:07:43.287920   14706 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 05:07:43.358739   14706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:07:43.422050   14706 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 05:07:43.428811   14706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 05:07:43.433133   14706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:07:43.499301   14706 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 05:07:43.537912   14706 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 05:07:43.537981   14706 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
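Note: "Will wait 60s for socket path" above is a stat-until-deadline poll on /var/run/cri-dockerd.sock. A sketch of that loop (the 500ms poll interval is an assumption):

    // waitsock_sketch.go - poll for a path until a deadline.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket showed up
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }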
	I0415 05:07:43.541054   14706 start.go:562] Will wait 60s for crictl version
	I0415 05:07:43.541110   14706 ssh_runner.go:195] Run: which crictl
	I0415 05:07:43.542401   14706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 05:07:43.553943   14706 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0415 05:07:43.554011   14706 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 05:07:43.566428   14706 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 05:07:43.587719   14706 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0415 05:07:43.587789   14706 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0415 05:07:43.589142   14706 kubeadm.go:877] updating cluster {Name:running-upgrade-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52246 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0415 05:07:43.589183   14706 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0415 05:07:43.589221   14706 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 05:07:43.599775   14706 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 05:07:43.599790   14706 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0415 05:07:43.599836   14706 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 05:07:43.602745   14706 ssh_runner.go:195] Run: which lz4
	I0415 05:07:43.604046   14706 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 05:07:43.605281   14706 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 05:07:43.605292   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0415 05:07:44.293694   14706 docker.go:649] duration metric: took 689.709458ms to copy over tarball
	I0415 05:07:44.293755   14706 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 05:07:45.390293   14706 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.096566125s)
	I0415 05:07:45.390307   14706 ssh_runner.go:146] rm: /preloaded.tar.lz4
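Note: the stat-then-scp pair above guards against re-sending the 359 MB preload tarball; it is only transferred when the existence check fails, then unpacked with tar -I lz4. The same guard expressed as a local copy, with illustrative paths:

    // preload_sketch.go - copy a file only when the destination is missing.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "io/fs"
        "os"
    )

    func copyIfMissing(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already transferred, skip the large copy
        } else if !errors.Is(err, fs.ErrNotExist) {
            return err // a real stat failure, not just "missing"
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        err := copyIfMissing("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4", "/preloaded.tar.lz4")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }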
	I0415 05:07:45.405997   14706 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 05:07:45.409611   14706 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0415 05:07:45.414628   14706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:07:45.487989   14706 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 05:07:46.647409   14706 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.159451s)
	I0415 05:07:46.647495   14706 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 05:07:46.658401   14706 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 05:07:46.658409   14706 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0415 05:07:46.658414   14706 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0415 05:07:46.666676   14706 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:07:46.666742   14706 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:07:46.666853   14706 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:07:46.666924   14706 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:07:46.667053   14706 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:07:46.667165   14706 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:07:46.667250   14706 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0415 05:07:46.667497   14706 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:07:46.675633   14706 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:07:46.675689   14706 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:07:46.675747   14706 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0415 05:07:46.675800   14706 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:07:46.675854   14706 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:07:46.675864   14706 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:07:46.676117   14706 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:07:46.676210   14706 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:07:47.054788   14706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0415 05:07:47.066602   14706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:07:47.067848   14706 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0415 05:07:47.067868   14706 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0415 05:07:47.067898   14706 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0415 05:07:47.070480   14706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:07:47.079530   14706 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0415 05:07:47.079558   14706 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:07:47.079616   14706 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:07:47.087486   14706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0415 05:07:47.087527   14706 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0415 05:07:47.087543   14706 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:07:47.087585   14706 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:07:47.087593   14706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0415 05:07:47.093058   14706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:07:47.095641   14706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0415 05:07:47.095648   14706 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0415 05:07:47.095662   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0415 05:07:47.100964   14706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0415 05:07:47.108627   14706 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0415 05:07:47.108648   14706 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:07:47.108696   14706 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0415 05:07:47.110199   14706 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0415 05:07:47.110303   14706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:07:47.113060   14706 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0415 05:07:47.113068   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0415 05:07:47.129331   14706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0415 05:07:47.129359   14706 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0415 05:07:47.129374   14706 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:07:47.129414   14706 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:07:47.163039   14706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0415 05:07:47.163076   14706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0415 05:07:47.163188   14706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0415 05:07:47.163451   14706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:07:47.164780   14706 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0415 05:07:47.164793   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0415 05:07:47.168384   14706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0415 05:07:47.182598   14706 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0415 05:07:47.182622   14706 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:07:47.182677   14706 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:07:47.188670   14706 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0415 05:07:47.188691   14706 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:07:47.188758   14706 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0415 05:07:47.208396   14706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0415 05:07:47.220181   14706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0415 05:07:47.220303   14706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0415 05:07:47.227220   14706 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0415 05:07:47.227241   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0415 05:07:47.230232   14706 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0415 05:07:47.230241   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0415 05:07:47.334878   14706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0415 05:07:47.378823   14706 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0415 05:07:47.378948   14706 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:07:47.407312   14706 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0415 05:07:47.407335   14706 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:07:47.407390   14706 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:07:47.493660   14706 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0415 05:07:47.493681   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0415 05:07:47.565755   14706 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0415 05:07:47.565888   14706 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0415 05:07:47.649144   14706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0415 05:07:47.649159   14706 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0415 05:07:47.649191   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0415 05:07:47.690079   14706 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0415 05:07:47.690094   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0415 05:07:47.965100   14706 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0415 05:07:47.965143   14706 cache_images.go:92] duration metric: took 1.306776084s to LoadCachedImages
	W0415 05:07:47.965191   14706 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
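Note: each "needs transfer" decision above comes from comparing the image ID the runtime reports against the expected hash; on mismatch the image is removed, copied from the host cache, and piped into "docker load". A sketch of the probe, using the expected pause:3.7 ID from the log above:

    // imgcheck_sketch.go - does the runtime already hold the expected image?
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func needsTransfer(ref, wantID string) bool {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        if err != nil {
            return true // not present at all: transfer
        }
        // docker prints "sha256:<hash>"; the log compares the bare hash.
        got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
        return got != wantID
    }

    func main() {
        want := "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"
        fmt.Println("needs transfer:", needsTransfer("registry.k8s.io/pause:3.7", want))
    }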
	I0415 05:07:47.965197   14706 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0415 05:07:47.965258   14706 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-921000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
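Note: the ExecStart line in the kubelet unit above is assembled from per-node flags. A sketch of deterministic flag-line assembly (function name and map are hypothetical, not minikube's internals):

    // kubeletflags_sketch.go - build a sorted "--k=v" command line.
    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    func flagLine(bin string, flags map[string]string) string {
        keys := make([]string, 0, len(flags))
        for k := range flags {
            keys = append(keys, k)
        }
        sort.Strings(keys) // deterministic unit file contents
        parts := []string{bin}
        for _, k := range keys {
            parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
        }
        return strings.Join(parts, " ")
    }

    func main() {
        fmt.Println(flagLine("/var/lib/minikube/binaries/v1.24.1/kubelet", map[string]string{
            "container-runtime-endpoint": "unix:///var/run/cri-dockerd.sock",
            "hostname-override":          "running-upgrade-921000",
            "node-ip":                    "10.0.2.15",
        }))
    }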
	I0415 05:07:47.965324   14706 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 05:07:47.980601   14706 cni.go:84] Creating CNI manager for ""
	I0415 05:07:47.980613   14706 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:07:47.980618   14706 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 05:07:47.980629   14706 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-921000 NodeName:running-upgrade-921000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 05:07:47.980692   14706 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-921000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
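
Note: the generated kubeadm.yaml above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A sketch that splits and identifies them, assuming the gopkg.in/yaml.v3 module is available:

    // kubeadmkinds_sketch.go - enumerate the documents in a multi-doc YAML file.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break // no more documents
            } else if err != nil {
                fmt.Fprintln(os.Stderr, err)
                return
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind) // e.g. kubeadm.k8s.io/v1beta3 InitConfiguration
        }
    }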
	
	I0415 05:07:47.980754   14706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0415 05:07:47.984305   14706 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 05:07:47.984336   14706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 05:07:47.986992   14706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0415 05:07:47.991747   14706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 05:07:47.996537   14706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0415 05:07:48.001988   14706 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0415 05:07:48.003421   14706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:07:48.067847   14706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 05:07:48.072633   14706 certs.go:68] Setting up /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000 for IP: 10.0.2.15
	I0415 05:07:48.072641   14706 certs.go:194] generating shared ca certs ...
	I0415 05:07:48.072650   14706 certs.go:226] acquiring lock for ca certs: {Name:mka361b510efa9f3b096790ec7d465ec93fc1994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:07:48.072890   14706 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.key
	I0415 05:07:48.072941   14706 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.key
	I0415 05:07:48.072949   14706 certs.go:256] generating profile certs ...
	I0415 05:07:48.073033   14706 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/client.key
	I0415 05:07:48.073044   14706 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.key.bba591e0
	I0415 05:07:48.073056   14706 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.crt.bba591e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0415 05:07:48.212816   14706 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.crt.bba591e0 ...
	I0415 05:07:48.212832   14706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.crt.bba591e0: {Name:mk4195cd57537384eb6d5fc3bb07f2fa94cee6a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:07:48.213127   14706 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.key.bba591e0 ...
	I0415 05:07:48.213132   14706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.key.bba591e0: {Name:mk7bfbdaed5109371455cbcf0131af256cff5b9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:07:48.213275   14706 certs.go:381] copying /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.crt.bba591e0 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.crt
	I0415 05:07:48.213407   14706 certs.go:385] copying /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.key.bba591e0 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.key
	I0415 05:07:48.213563   14706 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/proxy-client.key
	I0415 05:07:48.213721   14706 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715.pem (1338 bytes)
	W0415 05:07:48.213753   14706 certs.go:480] ignoring /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715_empty.pem, impossibly tiny 0 bytes
	I0415 05:07:48.213759   14706 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem (1675 bytes)
	I0415 05:07:48.213789   14706 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem (1078 bytes)
	I0415 05:07:48.213814   14706 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem (1123 bytes)
	I0415 05:07:48.213843   14706 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem (1675 bytes)
	I0415 05:07:48.213891   14706 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem (1708 bytes)
	I0415 05:07:48.214222   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 05:07:48.221715   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 05:07:48.229042   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 05:07:48.236279   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 05:07:48.243213   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0415 05:07:48.249936   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 05:07:48.257445   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 05:07:48.265136   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 05:07:48.272543   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 05:07:48.279203   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715.pem --> /usr/share/ca-certificates/12715.pem (1338 bytes)
	I0415 05:07:48.285874   14706 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem --> /usr/share/ca-certificates/127152.pem (1708 bytes)
	I0415 05:07:48.293180   14706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 05:07:48.298258   14706 ssh_runner.go:195] Run: openssl version
	I0415 05:07:48.300245   14706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 05:07:48.303451   14706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:07:48.304955   14706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 12:07 /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:07:48.304978   14706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:07:48.306930   14706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 05:07:48.309841   14706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12715.pem && ln -fs /usr/share/ca-certificates/12715.pem /etc/ssl/certs/12715.pem"
	I0415 05:07:48.313361   14706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12715.pem
	I0415 05:07:48.314867   14706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 11:54 /usr/share/ca-certificates/12715.pem
	I0415 05:07:48.314890   14706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12715.pem
	I0415 05:07:48.316663   14706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12715.pem /etc/ssl/certs/51391683.0"
	I0415 05:07:48.319290   14706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/127152.pem && ln -fs /usr/share/ca-certificates/127152.pem /etc/ssl/certs/127152.pem"
	I0415 05:07:48.322250   14706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127152.pem
	I0415 05:07:48.323719   14706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 11:54 /usr/share/ca-certificates/127152.pem
	I0415 05:07:48.323737   14706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127152.pem
	I0415 05:07:48.325510   14706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/127152.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 05:07:48.328492   14706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 05:07:48.329915   14706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0415 05:07:48.331812   14706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0415 05:07:48.333529   14706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0415 05:07:48.335553   14706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0415 05:07:48.337639   14706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0415 05:07:48.339647   14706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
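Note: the six openssl invocations above are "-checkend 86400" probes; the exit status says whether each control-plane cert expires within the next 24 hours. The equivalent check in Go:

    // checkend_sketch.go - does a PEM cert expire within the given window?
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Until(cert.NotAfter) < d, nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }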
	I0415 05:07:48.341401   14706 kubeadm.go:391] StartCluster: {Name:running-upgrade-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52246 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:07:48.341466   14706 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 05:07:48.353494   14706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0415 05:07:48.358233   14706 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0415 05:07:48.358238   14706 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0415 05:07:48.358241   14706 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0415 05:07:48.358260   14706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0415 05:07:48.361655   14706 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0415 05:07:48.361691   14706 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-921000" does not appear in /Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:07:48.361715   14706 kubeconfig.go:62] /Users/jenkins/minikube-integration/18644-12307/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-921000" cluster setting kubeconfig missing "running-upgrade-921000" context setting]
	I0415 05:07:48.361886   14706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/kubeconfig: {Name:mk14f6faef80a57910c6a54d13a440297d06327c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:07:48.362558   14706 kapi.go:59] client config for running-upgrade-921000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/client.key", CAFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10216bc80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 05:07:48.363369   14706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0415 05:07:48.366082   14706 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-921000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
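The drift check is just a unified diff between the kubeadm config already on the node and the freshly rendered one; any difference (here, the cri-dockerd socket gaining a unix:// scheme and the cgroup driver moving from systemd to cgroupfs) triggers a reconfigure from the .new file. A sketch of the idea, not minikube's exact control flow:

	# diff exits non-zero when the rendered config changed
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	fi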
	I0415 05:07:48.366087   14706 kubeadm.go:1154] stopping kube-system containers ...
	I0415 05:07:48.366124   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 05:07:48.377681   14706 docker.go:483] Stopping containers: [01884ef518c9 531f4200da55 42c62a635654 c723a3bfbd35 6ff890f8afe9 825d2d15ca8d ca69570f88f3 2231e4ededcd dcbe80bb310e 694f8dbe197e 9dc55fac76b9 8197275657ce f32f317db68f 144d4052b562 ef374bd1ac72]
	I0415 05:07:48.377743   14706 ssh_runner.go:195] Run: docker stop 01884ef518c9 531f4200da55 42c62a635654 c723a3bfbd35 6ff890f8afe9 825d2d15ca8d ca69570f88f3 2231e4ededcd dcbe80bb310e 694f8dbe197e 9dc55fac76b9 8197275657ce f32f317db68f 144d4052b562 ef374bd1ac72
	I0415 05:07:49.424960   14706 ssh_runner.go:235] Completed: docker stop 01884ef518c9 531f4200da55 42c62a635654 c723a3bfbd35 6ff890f8afe9 825d2d15ca8d ca69570f88f3 2231e4ededcd dcbe80bb310e 694f8dbe197e 9dc55fac76b9 8197275657ce f32f317db68f 144d4052b562 ef374bd1ac72: (1.047242792s)
	I0415 05:07:49.425046   14706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
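Before reconfiguring, every kube-system container is stopped (its ID collected via a docker name filter, then passed to docker stop) and the kubelet unit is halted so it cannot restart them. The two steps condensed into a pipeline, assuming an xargs that supports -r (skip when no IDs match):

	docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' \
	    | xargs -r docker stop
	sudo systemctl stop kubelet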
	I0415 05:07:49.514751   14706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 05:07:49.518187   14706 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Apr 15 12:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Apr 15 12:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Apr 15 12:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Apr 15 12:07 /etc/kubernetes/scheduler.conf
	
	I0415 05:07:49.518224   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/admin.conf
	I0415 05:07:49.521082   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0415 05:07:49.521107   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 05:07:49.523835   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/kubelet.conf
	I0415 05:07:49.526548   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0415 05:07:49.526572   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 05:07:49.529591   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/controller-manager.conf
	I0415 05:07:49.532502   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0415 05:07:49.532553   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 05:07:49.536298   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/scheduler.conf
	I0415 05:07:49.540100   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0415 05:07:49.540141   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
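Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; a failed grep (exit status 1, as seen above) means the file points at a stale endpoint, so it is deleted and left for kubeadm to regenerate. The same four steps as a loop, with the endpoint copied from the log:

	endpoint="https://control-plane.minikube.internal:52246"
	for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
	        || sudo rm -f "/etc/kubernetes/$f.conf"   # stale: regenerate via kubeadm
	done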
	I0415 05:07:49.542894   14706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 05:07:49.545671   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:07:49.566370   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:07:50.045895   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:07:50.239969   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:07:50.263050   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
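Rather than a full kubeadm init, the restart path replays individual init phases so existing cluster state is reused where possible: certificates, kubeconfigs, kubelet startup, the static control-plane pod manifests, and local etcd. Stripped of the ssh_runner wrapping, the sequence run above is:

	cfg=/var/tmp/minikube/kubeadm.yaml
	bin=/var/lib/minikube/binaries/v1.24.1
	sudo env PATH="$bin:$PATH" kubeadm init phase certs all --config "$cfg"
	sudo env PATH="$bin:$PATH" kubeadm init phase kubeconfig all --config "$cfg"
	sudo env PATH="$bin:$PATH" kubeadm init phase kubelet-start --config "$cfg"
	sudo env PATH="$bin:$PATH" kubeadm init phase control-plane all --config "$cfg"
	sudo env PATH="$bin:$PATH" kubeadm init phase etcd local --config "$cfg"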
	I0415 05:07:50.283520   14706 api_server.go:52] waiting for apiserver process to appear ...
	I0415 05:07:50.283604   14706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:07:50.785902   14706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:07:51.285695   14706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:07:51.292924   14706 api_server.go:72] duration metric: took 1.009445709s to wait for apiserver process to appear ...
	I0415 05:07:51.292939   14706 api_server.go:88] waiting for apiserver healthz status ...
	I0415 05:07:51.292956   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:07:56.294940   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:07:56.295069   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:01.295758   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:01.295915   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:06.296695   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:06.296743   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:11.297606   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:11.297706   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:16.299061   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:16.299200   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:21.301160   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:21.301251   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:26.303683   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:26.303760   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:31.306117   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:31.306181   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:36.308617   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:36.308691   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:41.310969   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:41.311041   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:46.313385   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:08:46.313452   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:08:51.315883   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
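Every probe of https://10.0.2.15:8443/healthz above times out after roughly five seconds and is retried; the apiserver never reports healthy, which is what drives the repeated log-gathering below. An equivalent manual probe (-k skips verification of the apiserver's self-signed cert; the 5-second timeout mirrors the log):

	until curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
	    sleep 5    # apiserver not answering yet
	done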
	I0415 05:08:51.316327   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:08:51.356147   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:08:51.356295   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:08:51.376536   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:08:51.376617   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:08:51.395602   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:08:51.395680   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:08:51.406986   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:08:51.407057   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:08:51.417378   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:08:51.417454   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:08:51.427539   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:08:51.427608   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:08:51.437360   14706 logs.go:276] 0 containers: []
	W0415 05:08:51.437378   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:08:51.437442   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:08:51.452393   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
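For diagnostics, each control-plane component's containers are located by a docker name filter over all containers, including exited ones; two IDs for a component (e.g. kube-apiserver) mean both the pre-restart and post-restart container are present. To reproduce the apiserver lookup by hand and pull the recent logs of one hit:

	docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
	docker logs --tail 400 34bb719097d2   # ID taken from the output above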
	I0415 05:08:51.452410   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:08:51.452416   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:08:51.464383   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:08:51.464394   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:08:51.476109   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:08:51.476119   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:08:51.488068   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:08:51.488081   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:08:51.523755   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:08:51.523848   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:08:51.524316   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:08:51.524322   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:08:51.535667   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:08:51.535683   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:08:51.552097   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:08:51.552107   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:08:51.563275   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:08:51.563287   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:08:51.574614   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:08:51.574624   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:08:51.647085   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:08:51.647098   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:08:51.661707   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:08:51.661717   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:08:51.676928   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:08:51.676938   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:08:51.701728   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:08:51.701737   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:08:51.713690   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:08:51.713701   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:08:51.718224   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:08:51.718230   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:08:51.732022   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:08:51.732033   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:08:51.749930   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:08:51.749941   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:08:51.749967   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:08:51.749972   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:08:51.749988   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:08:51.749995   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:08:51.749998   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
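The flagged kubelet problem is an authorization failure: the node identity system:node:running-upgrade-921000 is denied listing the coredns ConfigMap because the node authorizer finds no pod bound to that node that references the object. With working admin credentials, the effective permission could be probed via impersonation (a diagnostic suggestion, not something this run performs):

	kubectl auth can-i list configmaps \
	    --namespace kube-system \
	    --as system:node:running-upgrade-921000 \
	    --as-group system:nodes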
	I0415 05:09:01.753852   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:09:06.756270   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:09:06.756617   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:09:06.795212   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:09:06.795331   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:09:06.811725   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:09:06.811805   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:09:06.824876   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:09:06.824945   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:09:06.836590   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:09:06.836650   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:09:06.847303   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:09:06.847369   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:09:06.863137   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:09:06.863201   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:09:06.873678   14706 logs.go:276] 0 containers: []
	W0415 05:09:06.873698   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:09:06.873763   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:09:06.884339   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:09:06.884357   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:09:06.884362   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:09:06.920593   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:09:06.920604   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:09:06.925370   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:09:06.925379   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:09:06.939371   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:09:06.939380   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:09:06.956598   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:09:06.956607   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:09:06.972957   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:09:06.972967   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:09:06.984205   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:09:06.984216   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:09:06.995609   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:09:06.995620   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:09:07.007061   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:09:07.007072   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:09:07.043607   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:09:07.043699   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:09:07.044157   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:09:07.044162   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:09:07.055737   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:09:07.055748   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:09:07.070459   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:09:07.070471   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:09:07.088461   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:09:07.088472   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:09:07.100902   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:09:07.100917   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:09:07.112643   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:09:07.112657   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:09:07.140733   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:09:07.140742   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:09:07.157523   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:07.157535   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:09:07.157560   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:09:07.157567   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:09:07.157573   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:09:07.157578   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:07.157581   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:09:17.159474   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:09:22.162154   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:09:22.162619   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:09:22.204910   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:09:22.205048   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:09:22.225281   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:09:22.225397   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:09:22.240636   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:09:22.240727   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:09:22.252989   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:09:22.253059   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:09:22.267389   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:09:22.267471   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:09:22.279483   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:09:22.279553   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:09:22.295349   14706 logs.go:276] 0 containers: []
	W0415 05:09:22.295362   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:09:22.295421   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:09:22.309890   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:09:22.309907   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:09:22.309912   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:09:22.321956   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:09:22.321965   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:09:22.334249   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:09:22.334260   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:09:22.338784   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:09:22.338794   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:09:22.352817   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:09:22.352830   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:09:22.370668   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:09:22.370677   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:09:22.388568   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:09:22.388582   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:09:22.400439   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:09:22.400452   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:09:22.426419   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:09:22.426429   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:09:22.442978   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:09:22.442989   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:09:22.477351   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:09:22.477361   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:09:22.488818   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:09:22.488829   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:09:22.500132   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:09:22.500145   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:09:22.515182   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:09:22.515374   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:09:22.527050   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:09:22.527063   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:09:22.540936   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:09:22.540948   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:09:22.579094   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:09:22.579186   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:09:22.579643   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:22.579648   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:09:22.579675   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:09:22.579679   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:09:22.579692   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:09:22.579695   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:22.579698   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:09:32.583424   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:09:37.585523   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:09:37.585712   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:09:37.599377   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:09:37.599448   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:09:37.610426   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:09:37.610509   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:09:37.627701   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:09:37.627777   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:09:37.638327   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:09:37.638391   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:09:37.648909   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:09:37.648977   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:09:37.659816   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:09:37.659880   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:09:37.669601   14706 logs.go:276] 0 containers: []
	W0415 05:09:37.669612   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:09:37.669676   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:09:37.679637   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:09:37.679655   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:09:37.679661   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:09:37.715432   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:09:37.715444   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:09:37.727691   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:09:37.727703   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:09:37.739476   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:09:37.739488   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:09:37.763607   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:09:37.763614   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:09:37.799922   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:09:37.800014   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:09:37.800501   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:09:37.800505   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:09:37.815743   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:09:37.815753   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:09:37.832790   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:09:37.832800   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:09:37.844322   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:09:37.844334   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:09:37.856235   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:09:37.856244   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:09:37.861035   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:09:37.861042   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:09:37.878640   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:09:37.878650   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:09:37.895527   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:09:37.895538   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:09:37.906680   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:09:37.906695   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:09:37.921279   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:09:37.921289   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:09:37.932994   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:09:37.933004   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:09:37.944188   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:37.944197   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:09:37.944222   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:09:37.944226   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:09:37.944230   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:09:37.944234   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:37.944236   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:09:47.947948   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:09:52.950048   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:09:52.950485   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:09:52.989358   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:09:52.989519   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:09:53.018230   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:09:53.018348   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:09:53.032896   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:09:53.032957   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:09:53.044560   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:09:53.044629   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:09:53.054938   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:09:53.055001   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:09:53.069525   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:09:53.069582   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:09:53.080245   14706 logs.go:276] 0 containers: []
	W0415 05:09:53.080259   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:09:53.080307   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:09:53.090675   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:09:53.090703   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:09:53.090709   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:09:53.102742   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:09:53.102753   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:09:53.121086   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:09:53.121096   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:09:53.146200   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:09:53.146210   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:09:53.157493   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:09:53.157503   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:09:53.161642   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:09:53.161649   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:09:53.176142   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:09:53.176156   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:09:53.187942   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:09:53.187953   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:09:53.199628   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:09:53.199640   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:09:53.211025   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:09:53.211037   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:09:53.249251   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:09:53.249343   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:09:53.249795   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:09:53.249798   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:09:53.267512   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:09:53.267522   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:09:53.279319   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:09:53.279332   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:09:53.315758   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:09:53.315771   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:09:53.330151   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:09:53.330163   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:09:53.345142   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:09:53.345154   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:09:53.356370   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:53.356379   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:09:53.356405   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:09:53.356411   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:09:53.356418   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:09:53.356422   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:53.356443   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:10:03.360298   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:10:08.361360   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:10:08.361607   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:10:08.393972   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:10:08.394154   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:10:08.420892   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:10:08.420994   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:10:08.434155   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:10:08.434251   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:10:08.445406   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:10:08.445482   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:10:08.455825   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:10:08.455898   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:10:08.466561   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:10:08.466640   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:10:08.476617   14706 logs.go:276] 0 containers: []
	W0415 05:10:08.476627   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:10:08.476688   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:10:08.487283   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:10:08.487300   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:10:08.487305   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:10:08.507904   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:10:08.507914   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:10:08.523052   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:10:08.523065   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:10:08.534483   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:10:08.534495   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:10:08.546333   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:10:08.546346   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:10:08.583532   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:08.583623   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:08.584075   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:10:08.584080   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:10:08.598949   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:10:08.598961   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:10:08.616297   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:10:08.616308   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:10:08.627660   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:10:08.627670   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:10:08.652278   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:10:08.652285   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:10:08.656294   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:10:08.656301   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:10:08.689791   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:10:08.689802   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:10:08.705485   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:10:08.705495   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:10:08.722374   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:10:08.722383   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:10:08.740143   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:10:08.740155   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:10:08.754937   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:10:08.754949   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:10:08.766396   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:08.766406   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:10:08.766431   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:10:08.766436   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:08.766440   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:08.766445   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:08.766447   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
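Note: from here the same probe-then-gather cycle repeats for the rest of the wait: a pause of roughly ten seconds, a health check that times out after about five, then a full sweep of component logs. A compact sketch of the loop shape, with timings inferred from the timestamps rather than taken from minikube's code:

	# Inferred retry loop: probe the apiserver, and on timeout collect
	# component logs before sleeping and trying again.
	while ! curl -ks --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
	  sudo journalctl -u kubelet -n 400          # kubelet problems surface here
	  docker ps -a --filter name=k8s_ --format '{{.ID}}'
	  sleep 10
	done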
	I0415 05:10:18.770205   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:10:23.772318   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:10:23.772766   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:10:23.815952   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:10:23.816116   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:10:23.846833   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:10:23.846943   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:10:23.859914   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:10:23.860005   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:10:23.871953   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:10:23.872039   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:10:23.882088   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:10:23.882160   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:10:23.892907   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:10:23.892977   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:10:23.903712   14706 logs.go:276] 0 containers: []
	W0415 05:10:23.903726   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:10:23.903800   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:10:23.920821   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:10:23.920839   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:10:23.920845   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:10:23.958627   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:23.958721   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:23.959176   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:10:23.959182   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:10:23.970753   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:10:23.970766   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:10:23.982173   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:10:23.982183   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:10:23.993214   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:10:23.993224   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:10:24.004802   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:10:24.004813   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:10:24.019119   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:10:24.019128   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:10:24.039996   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:10:24.040007   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:10:24.051820   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:10:24.051832   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:10:24.069802   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:10:24.069813   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:10:24.105288   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:10:24.105297   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:10:24.119414   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:10:24.119424   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:10:24.136077   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:10:24.136086   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:10:24.148094   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:10:24.148103   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:10:24.152279   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:10:24.152285   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:10:24.164316   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:10:24.164326   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:10:24.187496   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:24.187505   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:10:24.187528   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:10:24.187532   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:24.187548   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:24.187553   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:24.187556   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:10:34.191311   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:10:39.193446   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:10:39.193735   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:10:39.224259   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:10:39.224404   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:10:39.242943   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:10:39.243047   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:10:39.257210   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:10:39.257300   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:10:39.269473   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:10:39.269552   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:10:39.279861   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:10:39.279946   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:10:39.290506   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:10:39.290594   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:10:39.301009   14706 logs.go:276] 0 containers: []
	W0415 05:10:39.301020   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:10:39.301086   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:10:39.311372   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:10:39.311390   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:10:39.311396   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:10:39.346441   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:10:39.346451   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:10:39.358978   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:10:39.358988   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:10:39.374148   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:10:39.374159   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:10:39.391307   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:10:39.391319   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:10:39.414982   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:10:39.414993   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:10:39.439500   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:10:39.439513   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:10:39.451540   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:10:39.451552   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:10:39.465708   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:10:39.465719   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:10:39.477184   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:10:39.477195   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:10:39.513159   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:39.513252   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:39.513706   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:10:39.513714   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:10:39.531719   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:10:39.531729   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:10:39.549225   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:10:39.549236   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:10:39.560951   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:10:39.560961   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:10:39.572046   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:10:39.572058   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:10:39.576363   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:10:39.576369   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:10:39.587900   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:39.587910   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:10:39.587936   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:10:39.587940   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:39.587943   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:39.588007   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:39.588013   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:10:49.590927   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:10:54.593304   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:10:54.593846   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:10:54.630745   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:10:54.630919   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:10:54.651707   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:10:54.651833   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:10:54.666470   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:10:54.666562   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:10:54.678917   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:10:54.679000   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:10:54.689601   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:10:54.689687   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:10:54.711588   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:10:54.711666   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:10:54.721520   14706 logs.go:276] 0 containers: []
	W0415 05:10:54.721533   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:10:54.721606   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:10:54.731896   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:10:54.731916   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:10:54.731925   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:10:54.767862   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:10:54.767875   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:10:54.779687   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:10:54.779698   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:10:54.797361   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:10:54.797371   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:10:54.808544   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:10:54.808556   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:10:54.844949   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:54.845046   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:54.845514   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:10:54.845519   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:10:54.859137   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:10:54.859148   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:10:54.879037   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:10:54.879046   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:10:54.892194   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:10:54.892205   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:10:54.905865   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:10:54.905875   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:10:54.917439   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:10:54.917450   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:10:54.932619   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:10:54.932631   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:10:54.944124   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:10:54.944144   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:10:54.948307   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:10:54.948312   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:10:54.959043   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:10:54.959053   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:10:54.982433   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:10:54.982442   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:10:54.994262   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:54.994275   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:10:54.994316   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:10:54.994322   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:54.994325   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:54.994330   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:54.994333   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:11:04.996725   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:09.998750   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:09.998839   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:11:10.011104   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:11:10.011187   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:11:10.021961   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:11:10.022045   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:11:10.033391   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:11:10.033474   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:11:10.044178   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:11:10.044266   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:11:10.055113   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:11:10.055198   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:11:10.066616   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:11:10.066696   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:11:10.076988   14706 logs.go:276] 0 containers: []
	W0415 05:11:10.076999   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:11:10.077064   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:11:10.087921   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:11:10.087936   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:11:10.087943   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:11:10.099598   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:11:10.099610   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:11:10.115326   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:11:10.115336   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:11:10.132898   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:11:10.132910   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:11:10.145286   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:11:10.145296   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:11:10.157489   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:11:10.157501   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:11:10.169298   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:11:10.169308   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:11:10.193372   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:11:10.193380   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:11:10.233253   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:10.233349   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:10.233817   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:11:10.233823   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:11:10.248720   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:11:10.248731   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:11:10.266643   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:11:10.266656   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:11:10.281457   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:11:10.281467   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:11:10.287143   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:11:10.287153   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:11:10.301690   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:11:10.301702   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:11:10.322231   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:11:10.322244   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:11:10.335254   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:11:10.335266   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:11:10.373944   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:10.373954   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:11:10.373983   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:11:10.373987   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:10.373995   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:10.374000   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:10.374003   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:11:20.377746   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:25.379798   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:25.379916   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:11:25.391108   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:11:25.391191   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:11:25.404714   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:11:25.404796   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:11:25.415105   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:11:25.415186   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:11:25.425597   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:11:25.425673   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:11:25.435972   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:11:25.436052   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:11:25.448185   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:11:25.448262   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:11:25.458926   14706 logs.go:276] 0 containers: []
	W0415 05:11:25.458938   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:11:25.459008   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:11:25.474728   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:11:25.474744   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:11:25.474749   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:11:25.489105   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:11:25.489118   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:11:25.506845   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:11:25.506857   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:11:25.518151   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:11:25.518162   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:11:25.532038   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:11:25.532048   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:11:25.549070   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:11:25.549081   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:11:25.561082   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:11:25.561092   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:11:25.572676   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:11:25.572686   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:11:25.595265   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:11:25.595274   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:11:25.630436   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:11:25.630451   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:11:25.644805   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:11:25.644816   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:11:25.657374   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:11:25.657385   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:11:25.669456   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:11:25.669465   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:11:25.684630   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:11:25.684643   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:11:25.700242   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:11:25.700253   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:11:25.738571   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:25.738678   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:25.739168   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:11:25.739175   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:11:25.743551   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:25.743560   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:11:25.743583   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:11:25.743589   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:25.743592   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:25.743598   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:25.743600   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:11:35.746112   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:40.748453   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:40.748686   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:11:40.779292   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:11:40.779437   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:11:40.797832   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:11:40.797921   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:11:40.816256   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:11:40.816342   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:11:40.827706   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:11:40.827783   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:11:40.842232   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:11:40.842314   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:11:40.852692   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:11:40.852766   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:11:40.863221   14706 logs.go:276] 0 containers: []
	W0415 05:11:40.863234   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:11:40.863301   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:11:40.873950   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:11:40.873966   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:11:40.873972   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:11:40.886461   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:11:40.886473   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:11:40.890716   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:11:40.890723   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:11:40.901635   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:11:40.901648   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:11:40.921670   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:11:40.921681   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:11:40.933444   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:11:40.933456   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:11:40.945671   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:11:40.945684   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:11:40.959366   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:11:40.959374   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:11:40.971165   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:11:40.971174   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:11:40.988058   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:11:40.988067   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:11:40.999834   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:11:40.999844   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:11:41.036357   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:11:41.036369   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:11:41.050259   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:11:41.050271   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:11:41.079712   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:11:41.079722   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:11:41.117653   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:41.117748   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:41.118229   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:11:41.118236   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:11:41.134468   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:11:41.134479   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:11:41.157245   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:41.157256   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:11:41.157280   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:11:41.157284   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:41.157289   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:41.157305   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:41.157311   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:11:51.161056   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:56.163243   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:56.163382   14706 kubeadm.go:591] duration metric: took 4m7.815445041s to restartPrimaryControlPlane
	W0415 05:11:56.163461   14706 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0415 05:11:56.163502   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0415 05:11:57.143745   14706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 05:11:57.148977   14706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 05:11:57.152114   14706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 05:11:57.154893   14706 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 05:11:57.154900   14706 kubeadm.go:156] found existing configuration files:
	
	I0415 05:11:57.154924   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/admin.conf
	I0415 05:11:57.157238   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 05:11:57.157271   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 05:11:57.160288   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/kubelet.conf
	I0415 05:11:57.163394   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 05:11:57.163424   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 05:11:57.166275   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/controller-manager.conf
	I0415 05:11:57.168683   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 05:11:57.168713   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 05:11:57.171857   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/scheduler.conf
	I0415 05:11:57.174776   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 05:11:57.174809   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
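Note: the grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it still references the expected control-plane endpoint, and is otherwise removed so the 'kubeadm init' run below can regenerate it. A shell sketch of the same pattern (port 52246 is the forwarded endpoint from this run):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # missing or pointing at a stale endpoint -> delete and let kubeadm rewrite it
      sudo grep -q "https://control-plane.minikube.internal:52246" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done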
	I0415 05:11:57.177242   14706 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 05:11:57.193815   14706 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0415 05:11:57.193850   14706 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 05:11:57.240505   14706 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 05:11:57.240598   14706 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 05:11:57.240647   14706 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 05:11:57.290154   14706 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 05:11:57.296176   14706 out.go:204]   - Generating certificates and keys ...
	I0415 05:11:57.296211   14706 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 05:11:57.296241   14706 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 05:11:57.296285   14706 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0415 05:11:57.296317   14706 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0415 05:11:57.296359   14706 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0415 05:11:57.296392   14706 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0415 05:11:57.296425   14706 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0415 05:11:57.296460   14706 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0415 05:11:57.296501   14706 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0415 05:11:57.296545   14706 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0415 05:11:57.296565   14706 kubeadm.go:309] [certs] Using the existing "sa" key
	I0415 05:11:57.296597   14706 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 05:11:57.504719   14706 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 05:11:57.607923   14706 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 05:11:57.726682   14706 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 05:11:57.779052   14706 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 05:11:57.812915   14706 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 05:11:57.813259   14706 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 05:11:57.813314   14706 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 05:11:57.884399   14706 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 05:11:57.889602   14706 out.go:204]   - Booting up control plane ...
	I0415 05:11:57.889654   14706 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 05:11:57.889720   14706 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 05:11:57.889759   14706 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 05:11:57.889804   14706 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 05:11:57.890545   14706 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 05:12:02.394158   14706 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503538 seconds
	I0415 05:12:02.394253   14706 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 05:12:02.399723   14706 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 05:12:02.911593   14706 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 05:12:02.911807   14706 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-921000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 05:12:03.417417   14706 kubeadm.go:309] [bootstrap-token] Using token: 4n0n1e.m5bb0672cxxkia65
	I0415 05:12:03.422944   14706 out.go:204]   - Configuring RBAC rules ...
	I0415 05:12:03.423008   14706 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 05:12:03.423068   14706 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 05:12:03.428341   14706 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 05:12:03.429548   14706 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 05:12:03.430433   14706 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 05:12:03.431361   14706 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 05:12:03.434631   14706 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 05:12:03.611312   14706 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 05:12:03.821806   14706 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 05:12:03.822285   14706 kubeadm.go:309] 
	I0415 05:12:03.822317   14706 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 05:12:03.822321   14706 kubeadm.go:309] 
	I0415 05:12:03.822357   14706 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 05:12:03.822360   14706 kubeadm.go:309] 
	I0415 05:12:03.822374   14706 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 05:12:03.822411   14706 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 05:12:03.822465   14706 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 05:12:03.822470   14706 kubeadm.go:309] 
	I0415 05:12:03.822498   14706 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 05:12:03.822502   14706 kubeadm.go:309] 
	I0415 05:12:03.822533   14706 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 05:12:03.822538   14706 kubeadm.go:309] 
	I0415 05:12:03.822565   14706 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 05:12:03.822609   14706 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 05:12:03.822644   14706 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 05:12:03.822647   14706 kubeadm.go:309] 
	I0415 05:12:03.822693   14706 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 05:12:03.822741   14706 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 05:12:03.822743   14706 kubeadm.go:309] 
	I0415 05:12:03.822782   14706 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4n0n1e.m5bb0672cxxkia65 \
	I0415 05:12:03.822836   14706 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bccfbc6802506c01fbf9bb31ef37a61b86ce5e1e10fb5fd68834dd4a5a1b8043 \
	I0415 05:12:03.822848   14706 kubeadm.go:309] 	--control-plane 
	I0415 05:12:03.822850   14706 kubeadm.go:309] 
	I0415 05:12:03.822890   14706 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 05:12:03.822902   14706 kubeadm.go:309] 
	I0415 05:12:03.822940   14706 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4n0n1e.m5bb0672cxxkia65 \
	I0415 05:12:03.823011   14706 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bccfbc6802506c01fbf9bb31ef37a61b86ce5e1e10fb5fd68834dd4a5a1b8043 
	I0415 05:12:03.823067   14706 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
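Note: the [WARNING Service-Kubelet] line above is kubeadm's own preflight hint; on a persistent host it would be addressed by enabling the unit so the kubelet survives reboots (command taken from the warning text itself):

    sudo systemctl enable kubelet.service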
	I0415 05:12:03.823073   14706 cni.go:84] Creating CNI manager for ""
	I0415 05:12:03.823080   14706 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:12:03.827583   14706 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 05:12:03.832563   14706 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 05:12:03.835739   14706 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
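Note: the 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A generic bridge CNI configuration of the kind recommended above might look like the following; the field values are illustrative assumptions, not the exact payload from this run:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF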
	I0415 05:12:03.842926   14706 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 05:12:03.842997   14706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-921000 minikube.k8s.io/updated_at=2024_04_15T05_12_03_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd91a0c5dbcf69c10661a6c45f66c039ce7b5f02 minikube.k8s.io/name=running-upgrade-921000 minikube.k8s.io/primary=true
	I0415 05:12:03.842997   14706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
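Note: the two kubectl invocations above share a timestamp because they run concurrently: one stamps minikube's metadata labels onto the primary node, the other binds cluster-admin to kube-system:default so addon components can manage cluster resources. A hypothetical spot-check once the apiserver responds:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node running-upgrade-921000 --show-labels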
	I0415 05:12:03.878280   14706 kubeadm.go:1107] duration metric: took 35.342375ms to wait for elevateKubeSystemPrivileges
	I0415 05:12:03.892625   14706 ops.go:34] apiserver oom_adj: -16
	W0415 05:12:03.892649   14706 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 05:12:03.892654   14706 kubeadm.go:393] duration metric: took 4m15.561891875s to StartCluster
	I0415 05:12:03.892664   14706 settings.go:142] acquiring lock: {Name:mkd1677925079a7213680748c8d1dd3d4ca25e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:12:03.893081   14706 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:12:03.893498   14706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/kubeconfig: {Name:mk14f6faef80a57910c6a54d13a440297d06327c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:12:03.893836   14706 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:12:03.897393   14706 out.go:177] * Verifying Kubernetes components...
	I0415 05:12:03.893860   14706 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 05:12:03.894020   14706 config.go:182] Loaded profile config "running-upgrade-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:12:03.903584   14706 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-921000"
	I0415 05:12:03.903589   14706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:12:03.903602   14706 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-921000"
	W0415 05:12:03.903607   14706 addons.go:243] addon storage-provisioner should already be in state true
	I0415 05:12:03.903617   14706 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-921000"
	I0415 05:12:03.903621   14706 host.go:66] Checking if "running-upgrade-921000" exists ...
	I0415 05:12:03.903630   14706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-921000"
	I0415 05:12:03.904811   14706 kapi.go:59] client config for running-upgrade-921000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/client.key", CAFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10216bc80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 05:12:03.905120   14706 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-921000"
	W0415 05:12:03.905125   14706 addons.go:243] addon default-storageclass should already be in state true
	I0415 05:12:03.905133   14706 host.go:66] Checking if "running-upgrade-921000" exists ...
	I0415 05:12:03.909507   14706 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:12:03.912573   14706 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 05:12:03.912579   14706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 05:12:03.912585   14706 sshutil.go:53] new ssh client: &{IP:localhost Port:52214 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/running-upgrade-921000/id_rsa Username:docker}
	I0415 05:12:03.913247   14706 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 05:12:03.913253   14706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 05:12:03.913256   14706 sshutil.go:53] new ssh client: &{IP:localhost Port:52214 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/running-upgrade-921000/id_rsa Username:docker}
	I0415 05:12:03.980072   14706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 05:12:03.986308   14706 api_server.go:52] waiting for apiserver process to appear ...
	I0415 05:12:03.986360   14706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:12:03.990299   14706 api_server.go:72] duration metric: took 96.454792ms to wait for apiserver process to appear ...
	I0415 05:12:03.990311   14706 api_server.go:88] waiting for apiserver healthz status ...
	I0415 05:12:03.990318   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:04.022164   14706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 05:12:04.022274   14706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 05:12:08.992238   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:08.992287   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:13.992444   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:13.992486   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:18.992620   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:18.992646   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:23.992898   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:23.992930   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:28.993322   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:28.993350   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:33.993907   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:33.993964   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0415 05:12:34.433411   14706 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0415 05:12:34.441171   14706 out.go:177] * Enabled addons: storage-provisioner
	I0415 05:12:34.455823   14706 addons.go:505] duration metric: took 30.56324225s for enable addons: enabled=[storage-provisioner]
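Note: every healthz probe in this run hits the 5-second client deadline ("Client.Timeout exceeded"), so the loop below alternates between probing and gathering diagnostics until the overall wait expires. The probe is roughly equivalent to this manual check (-k stands in for minikube's CA-pinned client):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz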
	I0415 05:12:38.994730   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:38.994804   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:43.996059   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:43.996079   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:48.996182   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:48.996266   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:53.998352   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:53.998389   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:59.000409   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:59.000432   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:04.002403   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:04.002517   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:04.014215   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:13:04.014325   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:04.030381   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:13:04.030469   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:04.046516   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:13:04.046600   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:04.061344   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:13:04.061428   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:04.071985   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:13:04.072072   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:04.082455   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:13:04.082535   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:04.092981   14706 logs.go:276] 0 containers: []
	W0415 05:13:04.092995   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:04.093063   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:04.104754   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:13:04.104769   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:13:04.104774   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:13:04.122440   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:04.122452   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:04.127139   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:13:04.127145   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:13:04.141052   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:13:04.141063   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:13:04.152604   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:13:04.152618   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:13:04.164656   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:13:04.164667   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:13:04.175964   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:04.175976   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:04.199117   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:13:04.199126   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:04.210038   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:04.210050   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:13:04.228774   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:04.228869   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:04.247768   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:04.247779   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:04.285436   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:13:04.285447   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:13:04.299501   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:13:04.299514   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:13:04.314938   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:13:04.314949   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:13:04.326399   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:04.326409   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:13:04.326433   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:13:04.326438   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:04.326442   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:04.326447   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:04.326450   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:13:14.330032   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:19.332054   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:19.332200   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:19.343162   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:13:19.343240   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:19.361979   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:13:19.362065   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:19.372485   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:13:19.372564   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:19.382533   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:13:19.382603   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:19.393026   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:13:19.393102   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:19.403898   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:13:19.403983   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:19.413790   14706 logs.go:276] 0 containers: []
	W0415 05:13:19.413807   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:19.413873   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:19.423904   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:13:19.423920   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:19.423926   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:19.428222   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:19.428229   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:19.462997   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:13:19.463007   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:13:19.474559   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:13:19.474570   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:13:19.489674   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:13:19.489683   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:13:19.501399   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:13:19.501411   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:19.512958   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:19.512968   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:19.536046   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:19.536057   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:13:19.553607   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:19.553699   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:19.572156   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:13:19.572164   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:13:19.586097   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:13:19.586107   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:13:19.599623   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:13:19.599633   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:13:19.610757   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:13:19.610768   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:13:19.628064   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:13:19.628074   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:13:19.641946   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:19.641955   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:13:19.641982   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:13:19.641987   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:19.641991   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:19.641996   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:19.641999   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:13:29.645782   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:34.647962   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:34.648141   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:34.666164   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:13:34.666270   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:34.680166   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:13:34.680257   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:34.692007   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:13:34.692086   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:34.702275   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:13:34.702345   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:34.712716   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:13:34.712787   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:34.723387   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:13:34.723467   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:34.733714   14706 logs.go:276] 0 containers: []
	W0415 05:13:34.733727   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:34.733798   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:34.744526   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:13:34.744545   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:13:34.744550   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:13:34.758432   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:13:34.758443   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:13:34.770065   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:13:34.770076   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:13:34.787323   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:34.787333   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:34.811043   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:13:34.811053   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:34.822873   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:34.822884   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:13:34.840229   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:34.840322   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:34.859171   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:34.859178   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:34.863359   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:34.863366   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:34.902584   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:13:34.902594   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:13:34.918218   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:13:34.918228   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:13:34.930484   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:13:34.930494   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:13:34.944951   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:13:34.944963   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:13:34.956143   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:13:34.956154   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:13:34.967654   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:34.967665   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:13:34.967689   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:13:34.967694   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:34.967697   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:34.967701   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:34.967727   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:13:44.970119   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:49.972635   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:49.972861   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:50.002500   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:13:50.002645   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:50.020648   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:13:50.020759   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:50.036030   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:13:50.036120   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:50.048375   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:13:50.048448   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:50.059033   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:13:50.059113   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:50.069686   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:13:50.069759   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:50.080177   14706 logs.go:276] 0 containers: []
	W0415 05:13:50.080190   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:50.080261   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:50.091069   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:13:50.091083   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:50.091088   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:50.115916   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:13:50.115928   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:50.127961   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:50.127970   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:50.132823   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:13:50.132829   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:13:50.144457   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:13:50.144468   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:13:50.156208   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:13:50.156217   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:13:50.174155   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:13:50.174165   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:13:50.185743   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:13:50.185754   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:13:50.197141   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:50.197151   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:13:50.215515   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:50.215608   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:50.233875   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:50.233882   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:50.269086   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:13:50.269097   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:13:50.283932   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:13:50.283942   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:13:50.297466   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:13:50.297476   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:13:50.313240   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:50.313249   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:13:50.313274   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:13:50.313281   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:50.313285   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:50.313290   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:50.313299   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:14:00.317069   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:05.319422   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:05.319691   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:05.348067   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:14:05.348219   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:05.365907   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:14:05.366013   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:05.379767   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:14:05.379862   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:05.391132   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:14:05.391207   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:05.401524   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:14:05.401613   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:05.420261   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:14:05.420341   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:05.434758   14706 logs.go:276] 0 containers: []
	W0415 05:14:05.434771   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:05.434840   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:05.447349   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:14:05.447363   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:14:05.447368   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:14:05.458885   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:14:05.458893   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:14:05.476267   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:14:05.476277   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:14:05.488702   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:14:05.488711   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:14:05.503737   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:14:05.503747   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:14:05.515893   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:14:05.515904   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:14:05.527589   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:14:05.527600   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:14:05.544689   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:05.544701   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:05.568994   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:14:05.569007   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:05.580124   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:05.580135   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:14:05.597995   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:05.598087   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:05.616528   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:05.616535   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:05.621140   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:05.621147   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:05.656557   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:14:05.656571   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:14:05.671988   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:05.671998   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:14:05.672021   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:14:05.672026   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:05.672030   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:05.672034   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:05.672036   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:14:15.674662   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:20.676874   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:20.677223   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:20.710116   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:14:20.710275   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:20.728709   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:14:20.728792   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:20.743119   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:14:20.743203   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:20.755313   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:14:20.755401   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:20.766500   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:14:20.766579   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:20.780557   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:14:20.780636   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:20.791221   14706 logs.go:276] 0 containers: []
	W0415 05:14:20.791231   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:20.791295   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:20.802119   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:14:20.802137   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:20.802142   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:14:20.819859   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:20.819951   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:20.838603   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:14:20.838610   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:14:20.852255   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:14:20.852265   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:14:20.863598   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:14:20.863609   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:14:20.879077   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:20.879086   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:20.883501   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:14:20.883508   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:14:20.894432   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:14:20.894442   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:14:20.911585   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:14:20.911598   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:14:20.923181   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:14:20.923193   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:20.935276   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:14:20.935285   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:14:20.956990   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:14:20.957002   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:14:20.968470   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:14:20.968482   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:14:20.980524   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:14:20.980533   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:14:20.992419   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:20.992429   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:21.027234   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:21.027246   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:21.052429   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:21.052437   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:14:21.052460   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:14:21.052464   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:21.052468   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:21.052472   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:21.052475   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:14:31.056316   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:36.058764   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:36.058997   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:36.085552   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:14:36.085691   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:36.105810   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:14:36.105912   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:36.118826   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:14:36.118916   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:36.130068   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:14:36.130149   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:36.140878   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:14:36.140953   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:36.151237   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:14:36.151311   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:36.161007   14706 logs.go:276] 0 containers: []
	W0415 05:14:36.161020   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:36.161087   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:36.171325   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:14:36.171345   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:14:36.171350   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:14:36.186670   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:14:36.186681   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:14:36.203871   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:36.203880   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:14:36.221880   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:36.221974   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:36.240524   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:14:36.240531   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:14:36.261216   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:14:36.261226   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:14:36.272512   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:14:36.272522   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:14:36.284379   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:36.284390   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:36.289036   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:36.289043   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:36.324766   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:14:36.324776   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:14:36.338966   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:36.338975   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:36.362869   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:14:36.362881   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:36.374628   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:14:36.374639   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:14:36.389907   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:14:36.389917   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:14:36.405930   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:14:36.405941   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:14:36.418224   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:14:36.418237   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:14:36.429669   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:36.429679   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:14:36.429708   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:14:36.429712   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:36.429716   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:36.429720   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:36.429723   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:14:46.433538   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:51.435793   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:51.436018   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:51.463215   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:14:51.463352   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:51.485665   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:14:51.485753   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:51.498362   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:14:51.498441   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:51.509483   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:14:51.509555   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:51.519800   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:14:51.519869   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:51.530517   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:14:51.530592   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:51.542320   14706 logs.go:276] 0 containers: []
	W0415 05:14:51.542332   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:51.542400   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:51.553413   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:14:51.553431   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:14:51.553440   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:14:51.565343   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:14:51.565353   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:14:51.580861   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:51.580872   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:51.604704   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:51.604714   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:14:51.621815   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:51.621907   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:51.640142   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:14:51.640147   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:14:51.651589   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:14:51.651602   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:14:51.663358   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:14:51.663368   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:14:51.675259   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:51.675270   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:51.680406   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:14:51.680416   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:14:51.694822   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:14:51.694833   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:14:51.711661   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:14:51.711672   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:51.723731   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:51.723742   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:51.758843   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:14:51.758853   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:14:51.777720   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:14:51.777731   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:14:51.789357   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:14:51.789366   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:14:51.813243   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:51.813252   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:14:51.813277   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:14:51.813281   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:51.813284   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:51.813290   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:51.813293   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:15:01.816957   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:06.819025   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:06.819193   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:06.837920   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:15:06.838006   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:06.848628   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:15:06.848715   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:06.858852   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:15:06.858934   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:06.870807   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:15:06.870879   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:06.881808   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:15:06.881889   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:06.892798   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:15:06.892874   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:06.902890   14706 logs.go:276] 0 containers: []
	W0415 05:15:06.902901   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:06.902956   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:06.913238   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:15:06.913262   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:06.913269   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:06.917858   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:06.917865   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:06.964707   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:15:06.964720   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:15:06.977880   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:06.977891   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:15:06.996058   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:06.996151   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:07.015197   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:15:07.015205   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:15:07.035800   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:15:07.035813   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:15:07.048725   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:15:07.048736   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:15:07.060174   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:15:07.060187   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:15:07.072682   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:15:07.072695   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:15:07.089248   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:15:07.089260   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:15:07.109729   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:15:07.109739   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:15:07.124969   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:15:07.124981   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:15:07.141939   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:15:07.141951   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:15:07.159215   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:07.159225   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:07.184308   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:15:07.184316   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:07.202193   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:07.202206   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:15:07.202238   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:15:07.202242   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:07.202255   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:07.202262   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:07.202265   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:15:17.205950   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:22.207935   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:22.208048   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:22.219558   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:15:22.219645   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:22.231784   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:15:22.231863   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:22.243369   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:15:22.243449   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:22.256135   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:15:22.256218   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:22.267446   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:15:22.267532   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:22.279343   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:15:22.279447   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:22.290984   14706 logs.go:276] 0 containers: []
	W0415 05:15:22.290997   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:22.291070   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:22.302905   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:15:22.302926   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:22.302931   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:22.328152   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:15:22.328168   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:15:22.348035   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:15:22.348051   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:22.362121   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:22.362133   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:22.367498   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:22.367511   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:22.410213   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:15:22.410225   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:15:22.424960   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:15:22.424973   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:15:22.437037   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:15:22.437048   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:15:22.449125   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:15:22.449137   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:15:22.461183   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:15:22.461196   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:15:22.481941   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:15:22.481955   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:15:22.494512   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:22.494527   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:15:22.514802   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:22.514901   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:22.534341   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:15:22.534360   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:15:22.549015   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:15:22.549029   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:15:22.564443   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:15:22.564454   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:15:22.576349   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:22.576359   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:15:22.576390   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:15:22.576396   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:22.576399   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:22.576403   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:22.576405   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:15:32.578563   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:37.580639   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:37.580786   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:37.593914   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:15:37.593996   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:37.605210   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:15:37.605310   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:37.616283   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:15:37.616367   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:37.626741   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:15:37.626824   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:37.637712   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:15:37.637785   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:37.647808   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:15:37.647885   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:37.658041   14706 logs.go:276] 0 containers: []
	W0415 05:15:37.658053   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:37.658127   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:37.674336   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:15:37.674352   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:37.674358   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:37.680805   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:15:37.680816   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:15:37.692410   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:15:37.692420   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:15:37.703507   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:15:37.703517   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:15:37.715344   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:37.715353   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:37.738647   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:15:37.738656   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:15:37.755686   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:15:37.755697   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:15:37.769555   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:15:37.769565   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:15:37.781071   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:15:37.781081   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:15:37.796197   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:15:37.796207   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:15:37.808183   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:37.808193   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:15:37.825676   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:37.825769   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:37.844329   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:37.844337   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:37.880561   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:15:37.880572   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:15:37.896007   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:15:37.896019   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:15:37.917208   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:15:37.917218   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:37.929347   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:37.929358   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:15:37.929383   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:15:37.929388   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:37.929392   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:37.929396   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:37.929398   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:15:47.933126   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:52.935170   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:52.935291   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:52.951822   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:15:52.951909   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:52.962431   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:15:52.962511   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:52.973720   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:15:52.973810   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:52.984465   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:15:52.984544   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:52.995127   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:15:52.995203   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:53.006048   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:15:53.006136   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:53.016234   14706 logs.go:276] 0 containers: []
	W0415 05:15:53.016244   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:53.016314   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:53.026805   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:15:53.026824   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:53.026830   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:53.061356   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:15:53.061368   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:15:53.073972   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:15:53.073983   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:15:53.085625   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:15:53.085637   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:15:53.107397   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:15:53.107408   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:15:53.125189   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:53.125199   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:15:53.145085   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:53.145177   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
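These two kubelet findings are a single event reported at warning and then error level by the same client-go reflector. The denial itself is expected behavior from the Kubernetes Node authorizer: a kubelet credential (system:node:<nodeName>) may only read a ConfigMap once a pod scheduled to that node references it, and during the in-place binary upgrade no such pod-to-object relationship exists yet, hence "no relationship found between node ... and this object". A hedged way to replay the authorization decision from the host, as a sketch (assumes kubectl points at this cluster and that the caller is permitted to impersonate node identities):

    # Illustrative sketch: ask the apiserver whether this node identity may list the ConfigMap.
    kubectl auth can-i list configmaps -n kube-system \
      --as=system:node:running-upgrade-921000 --as-group=system:nodes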
	I0415 05:15:53.164510   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:53.164533   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:53.170883   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:15:53.170896   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:15:53.195576   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:15:53.195588   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:15:53.209721   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:15:53.209732   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:15:53.223805   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:15:53.223817   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:15:53.235855   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:15:53.235865   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:53.247717   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:15:53.247728   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:15:53.259720   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:15:53.259731   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:15:53.271522   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:53.271532   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:53.294534   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:53.294543   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:15:53.294571   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:15:53.294576   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:53.294579   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:53.294584   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:53.294586   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:16:03.297827   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:08.300022   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:08.306322   14706 out.go:177] 
	W0415 05:16:08.311310   14706 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0415 05:16:08.311322   14706 out.go:239] * 
	W0415 05:16:08.312310   14706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:16:08.323213   14706 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-921000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
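Exit status 80 is minikube's guest-error class, matching the `GUEST_START` reason above: the apiserver at 10.0.2.15:8443 never answered `/healthz` within the 6m0s node-wait budget, so each probe in the preceding loop timed out at its 5s client deadline. A rough triage step, as a sketch (assuming the profile's VM is still running; binary, profile name, and address are taken from the log):

    # Illustrative only: probe the same health endpoint from inside the guest.
    out/minikube-darwin-arm64 ssh -p running-upgrade-921000 \
      "curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo"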
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-04-15 05:16:08.397304 -0700 PDT m=+1371.399592626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-921000 -n running-upgrade-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-921000 -n running-upgrade-921000: exit status 2 (15.556535958s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-921000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-flag-287000          | force-systemd-flag-287000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-env-608000              | force-systemd-env-608000  | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-608000           | force-systemd-env-608000  | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT | 15 Apr 24 05:06 PDT |
	| start   | -p docker-flags-299000                | docker-flags-299000       | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | --cache-images=false                  |                           |         |                |                     |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=false                          |                           |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |                |                     |                     |
	|         | --docker-opt=debug                    |                           |         |                |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-287000             | force-systemd-flag-287000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-287000          | force-systemd-flag-287000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT | 15 Apr 24 05:06 PDT |
	| start   | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | docker-flags-299000 ssh               | docker-flags-299000       | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=Environment                |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| ssh     | docker-flags-299000 ssh               | docker-flags-299000       | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=ExecStart                  |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| delete  | -p docker-flags-299000                | docker-flags-299000       | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT | 15 Apr 24 05:06 PDT |
	| start   | -p cert-options-218000                | cert-options-218000       | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | cert-options-218000 ssh               | cert-options-218000       | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-218000 -- sudo        | cert-options-218000       | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-218000                | cert-options-218000       | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:06 PDT | 15 Apr 24 05:06 PDT |
	| start   | -p running-upgrade-921000             | minikube                  | jenkins | v1.26.0        | 15 Apr 24 05:06 PDT | 15 Apr 24 05:07 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| start   | -p running-upgrade-921000             | running-upgrade-921000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:07 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| start   | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:09 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:09 PDT | 15 Apr 24 05:09 PDT |
	| start   | -p kubernetes-upgrade-487000          | kubernetes-upgrade-487000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:09 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-487000          | kubernetes-upgrade-487000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:09 PDT | 15 Apr 24 05:09 PDT |
	| start   | -p kubernetes-upgrade-487000          | kubernetes-upgrade-487000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:09 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2     |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-487000          | kubernetes-upgrade-487000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:10 PDT | 15 Apr 24 05:10 PDT |
	| start   | -p stopped-upgrade-413000             | minikube                  | jenkins | v1.26.0        | 15 Apr 24 05:10 PDT | 15 Apr 24 05:10 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-413000 stop           | minikube                  | jenkins | v1.26.0        | 15 Apr 24 05:10 PDT | 15 Apr 24 05:10 PDT |
	| start   | -p stopped-upgrade-413000             | stopped-upgrade-413000    | jenkins | v1.33.0-beta.0 | 15 Apr 24 05:10 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 05:10:50
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 05:10:50.161525   14863 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:10:50.161682   14863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:10:50.161687   14863 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:50.161690   14863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:10:50.161830   14863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:10:50.163027   14863 out.go:298] Setting JSON to false
	I0415 05:10:50.183381   14863 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7821,"bootTime":1713175229,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:10:50.183467   14863 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:10:50.188005   14863 out.go:177] * [stopped-upgrade-413000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:10:50.195998   14863 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:10:50.196096   14863 notify.go:220] Checking for updates...
	I0415 05:10:50.202873   14863 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:10:50.205952   14863 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:10:50.208949   14863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:10:50.211919   14863 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:10:50.214903   14863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:10:50.216313   14863 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:10:50.219855   14863 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0415 05:10:50.223869   14863 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:10:50.227889   14863 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:10:50.234903   14863 start.go:297] selected driver: qemu2
	I0415 05:10:50.234909   14863 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52436 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:10:50.234958   14863 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:10:50.237530   14863 cni.go:84] Creating CNI manager for ""
	I0415 05:10:50.237551   14863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:10:50.237578   14863 start.go:340] cluster config:
	{Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52436 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:10:50.237634   14863 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:10:50.244890   14863 out.go:177] * Starting "stopped-upgrade-413000" primary control-plane node in "stopped-upgrade-413000" cluster
	I0415 05:10:50.247874   14863 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0415 05:10:50.247889   14863 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0415 05:10:50.247894   14863 cache.go:56] Caching tarball of preloaded images
	I0415 05:10:50.247945   14863 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:10:50.247949   14863 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0415 05:10:50.247987   14863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/config.json ...
	I0415 05:10:50.248256   14863 start.go:360] acquireMachinesLock for stopped-upgrade-413000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:10:50.248295   14863 start.go:364] duration metric: took 33.666µs to acquireMachinesLock for "stopped-upgrade-413000"
	I0415 05:10:50.248303   14863 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:10:50.248309   14863 fix.go:54] fixHost starting: 
	I0415 05:10:50.248413   14863 fix.go:112] recreateIfNeeded on stopped-upgrade-413000: state=Stopped err=<nil>
	W0415 05:10:50.248421   14863 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:10:50.254836   14863 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-413000" ...
	I0415 05:10:49.590927   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:10:50.258959   14863 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52403-:22,hostfwd=tcp::52404-:2376,hostname=stopped-upgrade-413000 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/disk.qcow2
	I0415 05:10:50.306142   14863 main.go:141] libmachine: STDOUT: 
	I0415 05:10:50.306173   14863 main.go:141] libmachine: STDERR: 
	I0415 05:10:50.306183   14863 main.go:141] libmachine: Waiting for VM to start (ssh -p 52403 docker@127.0.0.1)...
	I0415 05:10:54.593304   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:10:54.593846   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:10:54.630745   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:10:54.630919   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:10:54.651707   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:10:54.651833   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:10:54.666470   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:10:54.666562   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:10:54.678917   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:10:54.679000   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:10:54.689601   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:10:54.689687   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:10:54.711588   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:10:54.711666   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:10:54.721520   14706 logs.go:276] 0 containers: []
	W0415 05:10:54.721533   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:10:54.721606   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:10:54.731896   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:10:54.731916   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:10:54.731925   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:10:54.767862   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:10:54.767875   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:10:54.779687   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:10:54.779698   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:10:54.797361   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:10:54.797371   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:10:54.808544   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:10:54.808556   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:10:54.844949   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:54.845046   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:54.845514   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:10:54.845519   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:10:54.859137   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:10:54.859148   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:10:54.879037   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:10:54.879046   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:10:54.892194   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:10:54.892205   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:10:54.905865   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:10:54.905875   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:10:54.917439   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:10:54.917450   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:10:54.932619   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:10:54.932631   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:10:54.944124   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:10:54.944144   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:10:54.948307   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:10:54.948312   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:10:54.959043   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:10:54.959053   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:10:54.982433   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:10:54.982442   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:10:54.994262   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:54.994275   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:10:54.994316   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:10:54.994322   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:10:54.994325   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:10:54.994330   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:54.994333   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:11:04.996725   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:09.888607   14863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/config.json ...
	I0415 05:11:09.889136   14863 machine.go:94] provisionDockerMachine start ...
	I0415 05:11:09.889639   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:09.889904   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:09.889919   14863 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 05:11:09.957887   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 05:11:09.957918   14863 buildroot.go:166] provisioning hostname "stopped-upgrade-413000"
	I0415 05:11:09.958002   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:09.958188   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:09.958196   14863 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-413000 && echo "stopped-upgrade-413000" | sudo tee /etc/hostname
	I0415 05:11:10.024814   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-413000
	
	I0415 05:11:10.024867   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.024994   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.025004   14863 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-413000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-413000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-413000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 05:11:10.086216   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 05:11:10.086229   14863 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18644-12307/.minikube CaCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18644-12307/.minikube}
	I0415 05:11:10.086237   14863 buildroot.go:174] setting up certificates
	I0415 05:11:10.086242   14863 provision.go:84] configureAuth start
	I0415 05:11:10.086246   14863 provision.go:143] copyHostCerts
	I0415 05:11:10.086351   14863 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem, removing ...
	I0415 05:11:10.086359   14863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem
	I0415 05:11:10.086468   14863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem (1675 bytes)
	I0415 05:11:10.086658   14863 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem, removing ...
	I0415 05:11:10.086663   14863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem
	I0415 05:11:10.086719   14863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem (1078 bytes)
	I0415 05:11:10.086826   14863 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem, removing ...
	I0415 05:11:10.086831   14863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem
	I0415 05:11:10.086887   14863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem (1123 bytes)
	I0415 05:11:10.086983   14863 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-413000 san=[127.0.0.1 localhost minikube stopped-upgrade-413000]
	I0415 05:11:09.998750   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:09.998839   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:11:10.011104   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:11:10.011187   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:11:10.021961   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:11:10.022045   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:11:10.033391   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:11:10.033474   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:11:10.044178   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:11:10.044266   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:11:10.055113   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:11:10.055198   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:11:10.066616   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:11:10.066696   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:11:10.076988   14706 logs.go:276] 0 containers: []
	W0415 05:11:10.076999   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:11:10.077064   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:11:10.087921   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:11:10.087936   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:11:10.087943   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:11:10.099598   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:11:10.099610   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:11:10.115326   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:11:10.115336   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:11:10.132898   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:11:10.132910   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:11:10.145286   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:11:10.145296   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:11:10.157489   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:11:10.157501   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:11:10.169298   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:11:10.169308   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:11:10.193372   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:11:10.193380   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:11:10.233253   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:10.233349   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:10.233817   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:11:10.233823   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:11:10.248720   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:11:10.248731   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:11:10.266643   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:11:10.266656   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:11:10.281457   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:11:10.281467   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:11:10.287143   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:11:10.287153   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:11:10.301690   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:11:10.301702   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:11:10.322231   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:11:10.322244   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:11:10.335254   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:11:10.335266   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:11:10.373944   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:10.373954   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:11:10.373983   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:11:10.373987   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:10.373995   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:10.374000   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:10.374003   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:11:10.244448   14863 provision.go:177] copyRemoteCerts
	I0415 05:11:10.244504   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 05:11:10.244514   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:11:10.277731   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0415 05:11:10.285831   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 05:11:10.294106   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0415 05:11:10.302132   14863 provision.go:87] duration metric: took 215.884458ms to configureAuth
	I0415 05:11:10.302144   14863 buildroot.go:189] setting minikube options for container-runtime
	I0415 05:11:10.302286   14863 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:11:10.302324   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.302420   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.302426   14863 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 05:11:10.364345   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 05:11:10.364359   14863 buildroot.go:70] root file system type: tmpfs
	I0415 05:11:10.364415   14863 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 05:11:10.364488   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.364617   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.364652   14863 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 05:11:10.427080   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 05:11:10.427141   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.427260   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.427268   14863 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 05:11:10.766343   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 05:11:10.766359   14863 machine.go:97] duration metric: took 877.249833ms to provisionDockerMachine
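The `diff -u old new || { mv; daemon-reload; restart; }` command above is an idempotent-update idiom: the service is only replaced and bounced when the rendered unit differs from what is on disk (here diff failed because no unit existed yet, so the new one was installed). A sketch of the same idea as a hypothetical Go helper, not minikube's provision.go:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // updateUnit rewrites the unit file and restarts the service only when
    // the freshly rendered content differs from the file on disk.
    func updateUnit(path string, rendered []byte) error {
    	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, rendered) {
    		return nil // unchanged: leave the running service alone
    	}
    	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
    		if err := exec.Command("systemctl", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	if err := updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")); err != nil {
    		panic(err)
    	}
    }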
	I0415 05:11:10.766367   14863 start.go:293] postStartSetup for "stopped-upgrade-413000" (driver="qemu2")
	I0415 05:11:10.766374   14863 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 05:11:10.766435   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 05:11:10.766445   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:11:10.797630   14863 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 05:11:10.799058   14863 info.go:137] Remote host: Buildroot 2021.02.12
	I0415 05:11:10.799066   14863 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18644-12307/.minikube/addons for local assets ...
	I0415 05:11:10.799158   14863 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18644-12307/.minikube/files for local assets ...
	I0415 05:11:10.799281   14863 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem -> 127152.pem in /etc/ssl/certs
	I0415 05:11:10.799408   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 05:11:10.802201   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem --> /etc/ssl/certs/127152.pem (1708 bytes)
	I0415 05:11:10.809385   14863 start.go:296] duration metric: took 43.015209ms for postStartSetup
	I0415 05:11:10.809399   14863 fix.go:56] duration metric: took 20.561946916s for fixHost
	I0415 05:11:10.809439   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.809537   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.809542   14863 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 05:11:10.867125   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713183070.489905420
	
	I0415 05:11:10.867133   14863 fix.go:216] guest clock: 1713183070.489905420
	I0415 05:11:10.867138   14863 fix.go:229] Guest: 2024-04-15 05:11:10.48990542 -0700 PDT Remote: 2024-04-15 05:11:10.809401 -0700 PDT m=+20.683534668 (delta=-319.49558ms)
	I0415 05:11:10.867150   14863 fix.go:200] guest clock delta is within tolerance: -319.49558ms
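The guest clock check above parses the guest's `date +%s.%N` output and compares it with the host clock. A sketch of that parse (it assumes nine fractional digits, as %N prints; the 2s tolerance below is a guess, not minikube's threshold):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts "1713183070.489905420" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1713183070.489905420")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < 2*time.Second)
    }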
	I0415 05:11:10.867152   14863 start.go:83] releasing machines lock for "stopped-upgrade-413000", held for 20.619710708s
	I0415 05:11:10.867222   14863 ssh_runner.go:195] Run: cat /version.json
	I0415 05:11:10.867224   14863 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 05:11:10.867230   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:11:10.867241   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	W0415 05:11:10.867899   14863 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52403: connect: connection refused
	I0415 05:11:10.867923   14863 retry.go:31] will retry after 144.831337ms: dial tcp [::1]:52403: connect: connection refused
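The refused dial above is retried after a short delay (retry.go:31). A self-contained sketch of that retry loop; the backoff schedule here is illustrative, not minikube's retry package:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialWithRetry retries a refused TCP dial with exponential backoff.
    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
    	backoff := 100 * time.Millisecond
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		c, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			return c, nil
    		}
    		lastErr = err
    		time.Sleep(backoff)
    		backoff *= 2
    	}
    	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
    }

    func main() {
    	if _, err := dialWithRetry("localhost:52403", 5); err != nil {
    		fmt.Println(err)
    	}
    }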
	W0415 05:11:10.899128   14863 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0415 05:11:10.899187   14863 ssh_runner.go:195] Run: systemctl --version
	I0415 05:11:10.901621   14863 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 05:11:10.903077   14863 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 05:11:10.903110   14863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0415 05:11:10.906095   14863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0415 05:11:10.910694   14863 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 05:11:10.910701   14863 start.go:494] detecting cgroup driver to use...
	I0415 05:11:10.910773   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 05:11:10.916702   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0415 05:11:10.919506   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 05:11:10.922434   14863 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 05:11:10.922466   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 05:11:10.925486   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 05:11:10.928628   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 05:11:10.931352   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 05:11:10.934288   14863 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 05:11:10.937438   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 05:11:10.940523   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 05:11:10.943170   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 05:11:10.946218   14863 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 05:11:10.949140   14863 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 05:11:10.951642   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:11.020550   14863 ssh_runner.go:195] Run: sudo systemctl restart containerd
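The sed one-liners above force containerd onto the cgroupfs driver before it is restarted. The key `SystemdCgroup = false` edit, expressed directly in Go as a sketch (path and file mode are assumptions):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }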
	I0415 05:11:11.031013   14863 start.go:494] detecting cgroup driver to use...
	I0415 05:11:11.031086   14863 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 05:11:11.043238   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 05:11:11.088543   14863 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 05:11:11.095069   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 05:11:11.099582   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 05:11:11.104203   14863 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 05:11:11.149760   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 05:11:11.155348   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 05:11:11.160987   14863 ssh_runner.go:195] Run: which cri-dockerd
	I0415 05:11:11.162152   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 05:11:11.165050   14863 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 05:11:11.170024   14863 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 05:11:11.231397   14863 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 05:11:11.292022   14863 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 05:11:11.292085   14863 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 05:11:11.297429   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:11.373638   14863 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 05:11:12.483764   14863 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.110144625s)
	I0415 05:11:12.483837   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 05:11:12.488658   14863 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0415 05:11:12.494981   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 05:11:12.499955   14863 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 05:11:12.576851   14863 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 05:11:12.652972   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:12.728330   14863 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 05:11:12.734244   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 05:11:12.739206   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:12.799167   14863 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 05:11:12.839855   14863 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 05:11:12.841029   14863 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 05:11:12.842926   14863 start.go:562] Will wait 60s for crictl version
	I0415 05:11:12.842959   14863 ssh_runner.go:195] Run: which crictl
	I0415 05:11:12.844213   14863 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 05:11:12.859065   14863 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
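"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a poll-until-exists loop with a deadline. A sketch of that wait (the poll interval is a guess):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the socket file exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }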
	I0415 05:11:12.859151   14863 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 05:11:12.875657   14863 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 05:11:12.895995   14863 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0415 05:11:12.896068   14863 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0415 05:11:12.897290   14863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 05:11:12.900828   14863 kubeadm.go:877] updating cluster {Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52436 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0415 05:11:12.900870   14863 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0415 05:11:12.900942   14863 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 05:11:12.911580   14863 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 05:11:12.911588   14863 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0415 05:11:12.911639   14863 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 05:11:12.914559   14863 ssh_runner.go:195] Run: which lz4
	I0415 05:11:12.915924   14863 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 05:11:12.917108   14863 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 05:11:12.917130   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0415 05:11:13.613022   14863 docker.go:649] duration metric: took 697.160417ms to copy over tarball
	I0415 05:11:13.613093   14863 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 05:11:14.869890   14863 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.256833583s)
	I0415 05:11:14.869903   14863 ssh_runner.go:146] rm: /preloaded.tar.lz4
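The preload flow above is: stat the tarball, scp the ~360 MB archive over when missing, untar it under /var with extended attributes preserved, then delete it. A local sketch of the extract step (the real commands run over SSH via ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("tarball missing; it would be scp'd over first:", err)
    		return
    	}
    	// Extract under /var so the images land in /var/lib/docker; --xattrs
    	// keeps file capabilities on the extracted binaries.
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
    		"security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("%v: %s", err, out))
    	}
    	_ = os.Remove(tarball) // reclaim the space once extracted
    }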
	I0415 05:11:14.885862   14863 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 05:11:14.889653   14863 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0415 05:11:14.894663   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:14.971969   14863 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 05:11:16.716827   14863 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.744911667s)
	I0415 05:11:16.716924   14863 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 05:11:16.730940   14863 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 05:11:16.730948   14863 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0415 05:11:16.730954   14863 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0415 05:11:16.737231   14863 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:16.737316   14863 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:16.737373   14863 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:16.737413   14863 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:16.737485   14863 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0415 05:11:16.737642   14863 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:16.737740   14863 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:16.737791   14863 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:16.747545   14863 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:16.747601   14863 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0415 05:11:16.747745   14863 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:16.748225   14863 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:16.748455   14863 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:16.748480   14863 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:16.748468   14863 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:16.748507   14863 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:17.155198   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0415 05:11:17.165722   14863 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0415 05:11:17.165747   14863 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0415 05:11:17.165803   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0415 05:11:17.175875   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0415 05:11:17.175980   14863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0415 05:11:17.177568   14863 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0415 05:11:17.177577   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0415 05:11:17.185152   14863 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0415 05:11:17.185164   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
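docker.go:304's "Loading image" step streams a saved image archive into `docker load`. The same pipe without the shell, as a sketch:

    package main

    import (
    	"os"
    	"os/exec"
    )

    // loadImage is the Go equivalent of `cat path | docker load`.
    func loadImage(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
    		panic(err)
    	}
    }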
	W0415 05:11:17.196795   14863 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0415 05:11:17.196933   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:17.200146   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:17.220978   14863 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0415 05:11:17.221147   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:17.221167   14863 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0415 05:11:17.221181   14863 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:17.221207   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:17.221839   14863 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0415 05:11:17.221858   14863 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:17.221883   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:17.233772   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:17.252191   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0415 05:11:17.252315   14863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0415 05:11:17.252317   14863 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0415 05:11:17.252333   14863 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:17.252370   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:17.257605   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0415 05:11:17.269461   14863 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0415 05:11:17.269480   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0415 05:11:17.269482   14863 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:17.269461   14863 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0415 05:11:17.269533   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0415 05:11:17.269543   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:17.287953   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0415 05:11:17.288086   14863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0415 05:11:17.290212   14863 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0415 05:11:17.290247   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0415 05:11:17.292080   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:17.292696   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:17.331459   14863 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0415 05:11:17.331504   14863 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:17.331620   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:17.339372   14863 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0415 05:11:17.339397   14863 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:17.339463   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:17.357542   14863 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0415 05:11:17.357568   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0415 05:11:17.382839   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0415 05:11:17.388095   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0415 05:11:17.484126   14863 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0415 05:11:17.554179   14863 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0415 05:11:17.554193   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0415 05:11:17.569652   14863 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0415 05:11:17.569773   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:17.705703   14863 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0415 05:11:17.705737   14863 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0415 05:11:17.705758   14863 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:17.705831   14863 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:17.719819   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0415 05:11:17.719940   14863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0415 05:11:17.721447   14863 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0415 05:11:17.721461   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0415 05:11:17.749698   14863 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0415 05:11:17.749712   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0415 05:11:17.989166   14863 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0415 05:11:17.989205   14863 cache_images.go:92] duration metric: took 1.258292833s to LoadCachedImages
	W0415 05:11:17.989252   14863 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0415 05:11:17.989257   14863 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0415 05:11:17.989307   14863 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-413000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 05:11:17.989369   14863 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 05:11:18.003219   14863 cni.go:84] Creating CNI manager for ""
	I0415 05:11:18.003231   14863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:11:18.003240   14863 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 05:11:18.003248   14863 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-413000 NodeName:stopped-upgrade-413000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 05:11:18.003318   14863 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-413000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 05:11:18.003380   14863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0415 05:11:18.006331   14863 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 05:11:18.006367   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 05:11:18.009373   14863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0415 05:11:18.014360   14863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 05:11:18.019350   14863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
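The kubeadm.yaml generated above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check, as a sketch that assumes gopkg.in/yaml.v3 is on the module path; minikube's own validation differs:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for i := 0; ; i++ {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err) // malformed document: fail before kubeadm does
    		}
    		fmt.Printf("doc %d: apiVersion=%v kind=%v\n", i, doc["apiVersion"], doc["kind"])
    	}
    }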
	I0415 05:11:18.024803   14863 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0415 05:11:18.026123   14863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 05:11:18.030098   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:18.113806   14863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 05:11:18.119634   14863 certs.go:68] Setting up /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000 for IP: 10.0.2.15
	I0415 05:11:18.119645   14863 certs.go:194] generating shared ca certs ...
	I0415 05:11:18.119671   14863 certs.go:226] acquiring lock for ca certs: {Name:mka361b510efa9f3b096790ec7d465ec93fc1994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:11:18.120520   14863 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.key
	I0415 05:11:18.120678   14863 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.key
	I0415 05:11:18.120685   14863 certs.go:256] generating profile certs ...
	I0415 05:11:18.120912   14863 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.key
	I0415 05:11:18.120928   14863 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657
	I0415 05:11:18.120941   14863 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0415 05:11:18.158684   14863 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 ...
	I0415 05:11:18.158698   14863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657: {Name:mk1cb4d2ae939b15cf618866d7fdcae7adbe5314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:11:18.159622   14863 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657 ...
	I0415 05:11:18.159629   14863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657: {Name:mkeff21d24c65d3e56ffa3c461d20fd8a7672579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:11:18.159767   14863 certs.go:381] copying /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt
	I0415 05:11:18.159903   14863 certs.go:385] copying /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key
	I0415 05:11:18.160139   14863 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/proxy-client.key
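The "generating signed profile cert" lines above produce an apiserver certificate whose IP SANs match the list in the log. An illustrative crypto/x509 sketch of that signing step (not minikube's crypto.go; key sizes and lifetimes here are assumptions):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The IP SANs from the log line above.
    		IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15")},
    	}
    	if _, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey); err != nil {
    		panic(err)
    	}
    	// The returned DER would be PEM-encoded and written out as apiserver.crt.
    }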
	I0415 05:11:18.160292   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715.pem (1338 bytes)
	W0415 05:11:18.160458   14863 certs.go:480] ignoring /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715_empty.pem, impossibly tiny 0 bytes
	I0415 05:11:18.160467   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem (1675 bytes)
	I0415 05:11:18.160491   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem (1078 bytes)
	I0415 05:11:18.160511   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem (1123 bytes)
	I0415 05:11:18.160535   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem (1675 bytes)
	I0415 05:11:18.160581   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem (1708 bytes)
	I0415 05:11:18.160917   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 05:11:18.169308   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 05:11:18.176629   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 05:11:18.183642   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 05:11:18.190168   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0415 05:11:18.197133   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0415 05:11:18.203567   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 05:11:18.210108   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 05:11:18.216879   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem --> /usr/share/ca-certificates/127152.pem (1708 bytes)
	I0415 05:11:18.223018   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 05:11:18.230028   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715.pem --> /usr/share/ca-certificates/12715.pem (1338 bytes)
	I0415 05:11:18.237029   14863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 05:11:18.242399   14863 ssh_runner.go:195] Run: openssl version
	I0415 05:11:18.244388   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/127152.pem && ln -fs /usr/share/ca-certificates/127152.pem /etc/ssl/certs/127152.pem"
	I0415 05:11:18.247059   14863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127152.pem
	I0415 05:11:18.248393   14863 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 11:54 /usr/share/ca-certificates/127152.pem
	I0415 05:11:18.248413   14863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127152.pem
	I0415 05:11:18.250056   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/127152.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 05:11:18.253392   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 05:11:18.256435   14863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:11:18.257778   14863 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 12:07 /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:11:18.257795   14863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:11:18.259583   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 05:11:18.262432   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12715.pem && ln -fs /usr/share/ca-certificates/12715.pem /etc/ssl/certs/12715.pem"
	I0415 05:11:18.265636   14863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12715.pem
	I0415 05:11:18.267042   14863 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 11:54 /usr/share/ca-certificates/12715.pem
	I0415 05:11:18.267070   14863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12715.pem
	I0415 05:11:18.268765   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12715.pem /etc/ssl/certs/51391683.0"
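The `ln -fs ... /etc/ssl/certs/<hash>.0` steps above exist because OpenSSL looks CA certificates up by subject-hash filename; `openssl x509 -hash -noout` prints the hash used for the link name (3ec20f2e, b5213941, 51391683 in this run). A sketch of one linking step:

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash mirrors:
    //   ln -fs cert /etc/ssl/certs/$(openssl x509 -hash -noout -in cert).0
    func linkBySubjectHash(cert string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // -f: replace a stale link if present
    	return os.Symlink(cert, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }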
	I0415 05:11:18.271470   14863 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 05:11:18.272835   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0415 05:11:18.274849   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0415 05:11:18.276704   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0415 05:11:18.278682   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0415 05:11:18.280408   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0415 05:11:18.282034   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
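The `-checkend 86400` probes above ask whether each certificate expires within the next 24 hours. The same check in Go, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }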
	I0415 05:11:18.283957   14863 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52436 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:11:18.284034   14863 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 05:11:18.294730   14863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0415 05:11:18.297983   14863 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0415 05:11:18.297990   14863 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0415 05:11:18.297993   14863 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0415 05:11:18.298026   14863 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0415 05:11:18.301292   14863 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0415 05:11:18.301705   14863 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-413000" does not appear in /Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:11:18.301804   14863 kubeconfig.go:62] /Users/jenkins/minikube-integration/18644-12307/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-413000" cluster setting kubeconfig missing "stopped-upgrade-413000" context setting]
	I0415 05:11:18.302007   14863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/kubeconfig: {Name:mk14f6faef80a57910c6a54d13a440297d06327c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:11:18.302449   14863 kapi.go:59] client config for stopped-upgrade-413000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.key", CAFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c93c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 05:11:18.302887   14863 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0415 05:11:18.305965   14863 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-413000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
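
The diff above is why minikube reconfigures instead of reusing the old kubeadm.yaml: cri-dockerd now expects the unix:// socket scheme, and this guest runs cgroupfs rather than systemd. A minimal sketch of such a drift check, assuming a configDrifted helper of my own naming (not minikube's actual kubeadm.go code), built on the same `sudo diff -u` invocation shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u` over the current and freshly generated
// kubeadm configs. diff exits 0 when the files match, 1 when they differ,
// and 2 on failure, which maps cleanly onto the three return paths below.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical files
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: report the diff
	}
	return false, "", err // diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Printf("detected kubeadm config drift:\n%s", diff)
	}
}
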
	I0415 05:11:18.305971   14863 kubeadm.go:1154] stopping kube-system containers ...
	I0415 05:11:18.306022   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 05:11:18.316494   14863 docker.go:483] Stopping containers: [0b788794f1b5 c93d5a242d94 f38c3f6989c2 e499c934f641 27f751bbc55d b26e657bb402 e6cc7832ba20 90ef6e31f982]
	I0415 05:11:18.316570   14863 ssh_runner.go:195] Run: docker stop 0b788794f1b5 c93d5a242d94 f38c3f6989c2 e499c934f641 27f751bbc55d b26e657bb402 e6cc7832ba20 90ef6e31f982
	I0415 05:11:18.327020   14863 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0415 05:11:18.332856   14863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 05:11:18.335646   14863 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 05:11:18.335656   14863 kubeadm.go:156] found existing configuration files:
	
	I0415 05:11:18.335686   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/admin.conf
	I0415 05:11:18.338156   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 05:11:18.338186   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 05:11:18.341225   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/kubelet.conf
	I0415 05:11:18.343953   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 05:11:18.343973   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 05:11:18.346360   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/controller-manager.conf
	I0415 05:11:18.349411   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 05:11:18.349443   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 05:11:18.352607   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/scheduler.conf
	I0415 05:11:18.355135   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 05:11:18.355165   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
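
The four grep-and-remove steps above clear any kubeconfig in /etc/kubernetes that does not reference the expected control-plane endpoint, so the subsequent `kubeadm init phase kubeconfig` can regenerate them. A minimal sketch of that loop (my structure and error handling, not minikube's code; paths and the endpoint mirror the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:52436"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; either way the stale file is removed for regeneration.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
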
	I0415 05:11:18.358158   14863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 05:11:18.361984   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:11:18.385456   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:11:18.886509   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:11:19.011053   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:11:19.037622   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
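
Note that the restart path replays individual `kubeadm init` phases rather than a full init, reusing whatever state survived the stop. An illustrative driver for the same five phases with the PATH override from the log; the sequencing loop is mine, not minikube's:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		// Each phase is run through bash so the PATH override applies,
		// exactly as the ssh_runner lines above do.
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
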
	I0415 05:11:19.066771   14863 api_server.go:52] waiting for apiserver process to appear ...
	I0415 05:11:19.066860   14863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:11:19.568985   14863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:11:20.068869   14863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:11:20.072987   14863 api_server.go:72] duration metric: took 1.006260083s to wait for apiserver process to appear ...
	I0415 05:11:20.072998   14863 api_server.go:88] waiting for apiserver healthz status ...
	I0415 05:11:20.073006   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:20.377746   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:25.073910   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:25.073934   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:25.379798   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
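
The regular ~5s gaps and "Client.Timeout exceeded" messages above come from a short per-request client timeout on the healthz probe, not from the apiserver answering. A minimal sketch of such a poll; InsecureSkipVerify is my simplification here, since the real client is configured with the profile's CA and client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s spacing of the checks above
		Transport: &http.Transport{
			// Simplification for the sketch: skip verification of the
			// apiserver's cert instead of loading the minikube CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 120; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
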
	I0415 05:11:25.379916   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:11:25.391108   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:11:25.391191   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:11:25.404714   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:11:25.404796   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:11:25.415105   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:11:25.415186   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:11:25.425597   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:11:25.425673   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:11:25.435972   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:11:25.436052   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:11:25.448185   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:11:25.448262   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:11:25.458926   14706 logs.go:276] 0 containers: []
	W0415 05:11:25.458938   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:11:25.459008   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:11:25.474728   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
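
Each component lookup above filters `docker ps -a` by the kubelet's k8s_<component>_<pod>_<namespace>_ container naming convention and collects the IDs for log gathering. One such lookup, sketched (illustrative, not logs.go itself; the docker invocation is the one from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_kube-apiserver", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	// One container ID per output line; two entries here would mean a
	// current container plus an exited predecessor, as in the log above.
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
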
	I0415 05:11:25.474744   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:11:25.474749   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:11:25.489105   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:11:25.489118   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:11:25.506845   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:11:25.506857   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:11:25.518151   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:11:25.518162   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:11:25.532038   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:11:25.532048   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:11:25.549070   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:11:25.549081   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:11:25.561082   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:11:25.561092   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:11:25.572676   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:11:25.572686   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:11:25.595265   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:11:25.595274   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:11:25.630436   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:11:25.630451   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:11:25.644805   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:11:25.644816   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:11:25.657374   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:11:25.657385   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:11:25.669456   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:11:25.669465   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:11:25.684630   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:11:25.684643   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:11:25.700242   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:11:25.700253   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:11:25.738571   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:25.738678   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:25.739168   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:11:25.739175   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:11:25.743551   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:25.743560   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:11:25.743583   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:11:25.743589   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:25.743592   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:25.743598   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:25.743600   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:11:30.074704   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:30.074742   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:35.074880   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:35.074959   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:35.746112   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:40.075276   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:40.075329   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:40.748453   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:40.748686   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:11:40.779292   14706 logs.go:276] 2 containers: [34bb719097d2 01884ef518c9]
	I0415 05:11:40.779437   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:11:40.797832   14706 logs.go:276] 2 containers: [def9914ead4d 6ff890f8afe9]
	I0415 05:11:40.797921   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:11:40.816256   14706 logs.go:276] 1 containers: [04ae38b68739]
	I0415 05:11:40.816342   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:11:40.827706   14706 logs.go:276] 2 containers: [24b9766e04bb ca69570f88f3]
	I0415 05:11:40.827783   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:11:40.842232   14706 logs.go:276] 1 containers: [ee30cb75f716]
	I0415 05:11:40.842314   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:11:40.852692   14706 logs.go:276] 2 containers: [44cd3ac0aba4 2231e4ededcd]
	I0415 05:11:40.852766   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:11:40.863221   14706 logs.go:276] 0 containers: []
	W0415 05:11:40.863234   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:11:40.863301   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:11:40.873950   14706 logs.go:276] 1 containers: [4a3f45afbdb1]
	I0415 05:11:40.873966   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:11:40.873972   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:11:40.886461   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:11:40.886473   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:11:40.890716   14706 logs.go:123] Gathering logs for coredns [04ae38b68739] ...
	I0415 05:11:40.890723   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04ae38b68739"
	I0415 05:11:40.901635   14706 logs.go:123] Gathering logs for kube-controller-manager [44cd3ac0aba4] ...
	I0415 05:11:40.901648   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44cd3ac0aba4"
	I0415 05:11:40.921670   14706 logs.go:123] Gathering logs for kube-controller-manager [2231e4ededcd] ...
	I0415 05:11:40.921681   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2231e4ededcd"
	I0415 05:11:40.933444   14706 logs.go:123] Gathering logs for kube-apiserver [01884ef518c9] ...
	I0415 05:11:40.933456   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01884ef518c9"
	I0415 05:11:40.945671   14706 logs.go:123] Gathering logs for etcd [def9914ead4d] ...
	I0415 05:11:40.945684   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def9914ead4d"
	I0415 05:11:40.959366   14706 logs.go:123] Gathering logs for kube-proxy [ee30cb75f716] ...
	I0415 05:11:40.959374   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee30cb75f716"
	I0415 05:11:40.971165   14706 logs.go:123] Gathering logs for kube-scheduler [ca69570f88f3] ...
	I0415 05:11:40.971174   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca69570f88f3"
	I0415 05:11:40.988058   14706 logs.go:123] Gathering logs for storage-provisioner [4a3f45afbdb1] ...
	I0415 05:11:40.988067   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a3f45afbdb1"
	I0415 05:11:40.999834   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:11:40.999844   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:11:41.036357   14706 logs.go:123] Gathering logs for kube-apiserver [34bb719097d2] ...
	I0415 05:11:41.036369   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34bb719097d2"
	I0415 05:11:41.050259   14706 logs.go:123] Gathering logs for etcd [6ff890f8afe9] ...
	I0415 05:11:41.050271   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff890f8afe9"
	I0415 05:11:41.079712   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:11:41.079722   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:11:41.117653   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:41.117748   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:41.118229   14706 logs.go:123] Gathering logs for kube-scheduler [24b9766e04bb] ...
	I0415 05:11:41.118236   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24b9766e04bb"
	I0415 05:11:41.134468   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:11:41.134479   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:11:41.157245   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:41.157256   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:11:41.157280   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:11:41.157284   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:11:41.157289   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:11:41.157305   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:11:41.157311   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:11:45.075782   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:45.075812   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:50.076368   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:50.076505   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:51.161056   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:55.077677   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:55.077749   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:56.163243   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:56.163382   14706 kubeadm.go:591] duration metric: took 4m7.815445041s to restartPrimaryControlPlane
	W0415 05:11:56.163461   14706 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0415 05:11:56.163502   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0415 05:11:57.143745   14706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 05:11:57.148977   14706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 05:11:57.152114   14706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 05:11:57.154893   14706 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 05:11:57.154900   14706 kubeadm.go:156] found existing configuration files:
	
	I0415 05:11:57.154924   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/admin.conf
	I0415 05:11:57.157238   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 05:11:57.157271   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 05:11:57.160288   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/kubelet.conf
	I0415 05:11:57.163394   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 05:11:57.163424   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 05:11:57.166275   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/controller-manager.conf
	I0415 05:11:57.168683   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 05:11:57.168713   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 05:11:57.171857   14706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/scheduler.conf
	I0415 05:11:57.174776   14706 kubeadm.go:162] "https://control-plane.minikube.internal:52246" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52246 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 05:11:57.174809   14706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 05:11:57.177242   14706 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 05:11:57.193815   14706 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0415 05:11:57.193850   14706 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 05:11:57.240505   14706 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 05:11:57.240598   14706 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 05:11:57.240647   14706 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 05:11:57.290154   14706 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 05:11:57.296176   14706 out.go:204]   - Generating certificates and keys ...
	I0415 05:11:57.296211   14706 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 05:11:57.296241   14706 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 05:11:57.296285   14706 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0415 05:11:57.296317   14706 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0415 05:11:57.296359   14706 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0415 05:11:57.296392   14706 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0415 05:11:57.296425   14706 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0415 05:11:57.296460   14706 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0415 05:11:57.296501   14706 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0415 05:11:57.296545   14706 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0415 05:11:57.296565   14706 kubeadm.go:309] [certs] Using the existing "sa" key
	I0415 05:11:57.296597   14706 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 05:11:57.504719   14706 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 05:11:57.607923   14706 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 05:11:57.726682   14706 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 05:11:57.779052   14706 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 05:11:57.812915   14706 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 05:11:57.813259   14706 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 05:11:57.813314   14706 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 05:11:57.884399   14706 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 05:12:00.079075   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:00.079100   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:57.889602   14706 out.go:204]   - Booting up control plane ...
	I0415 05:11:57.889654   14706 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 05:11:57.889720   14706 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 05:11:57.889759   14706 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 05:11:57.889804   14706 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 05:11:57.890545   14706 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 05:12:02.394158   14706 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503538 seconds
	I0415 05:12:02.394253   14706 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 05:12:02.399723   14706 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 05:12:02.911593   14706 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 05:12:02.911807   14706 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-921000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 05:12:03.417417   14706 kubeadm.go:309] [bootstrap-token] Using token: 4n0n1e.m5bb0672cxxkia65
	I0415 05:12:03.422944   14706 out.go:204]   - Configuring RBAC rules ...
	I0415 05:12:03.423008   14706 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 05:12:03.423068   14706 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 05:12:03.428341   14706 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 05:12:03.429548   14706 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 05:12:03.430433   14706 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 05:12:03.431361   14706 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 05:12:03.434631   14706 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 05:12:03.611312   14706 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 05:12:03.821806   14706 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 05:12:03.822285   14706 kubeadm.go:309] 
	I0415 05:12:03.822317   14706 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 05:12:03.822321   14706 kubeadm.go:309] 
	I0415 05:12:03.822357   14706 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 05:12:03.822360   14706 kubeadm.go:309] 
	I0415 05:12:03.822374   14706 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 05:12:03.822411   14706 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 05:12:03.822465   14706 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 05:12:03.822470   14706 kubeadm.go:309] 
	I0415 05:12:03.822498   14706 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 05:12:03.822502   14706 kubeadm.go:309] 
	I0415 05:12:03.822533   14706 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 05:12:03.822538   14706 kubeadm.go:309] 
	I0415 05:12:03.822565   14706 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 05:12:03.822609   14706 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 05:12:03.822644   14706 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 05:12:03.822647   14706 kubeadm.go:309] 
	I0415 05:12:03.822693   14706 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 05:12:03.822741   14706 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 05:12:03.822743   14706 kubeadm.go:309] 
	I0415 05:12:03.822782   14706 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4n0n1e.m5bb0672cxxkia65 \
	I0415 05:12:03.822836   14706 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bccfbc6802506c01fbf9bb31ef37a61b86ce5e1e10fb5fd68834dd4a5a1b8043 \
	I0415 05:12:03.822848   14706 kubeadm.go:309] 	--control-plane 
	I0415 05:12:03.822850   14706 kubeadm.go:309] 
	I0415 05:12:03.822890   14706 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 05:12:03.822902   14706 kubeadm.go:309] 
	I0415 05:12:03.822940   14706 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4n0n1e.m5bb0672cxxkia65 \
	I0415 05:12:03.823011   14706 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bccfbc6802506c01fbf9bb31ef37a61b86ce5e1e10fb5fd68834dd4a5a1b8043 
	I0415 05:12:03.823067   14706 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 05:12:03.823073   14706 cni.go:84] Creating CNI manager for ""
	I0415 05:12:03.823080   14706 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:12:03.827583   14706 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 05:12:03.832563   14706 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 05:12:03.835739   14706 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
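
The 496-byte conflist itself is not shown in the log. The following is a typical bridge-plus-portmap CNI config of the kind minikube writes for this setup, embedded in a small Go program (written to /tmp so the sketch needs no root); the exact field values are my assumption, not the verbatim file:

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// minikube scps this to /etc/cni/net.d/1-k8s.conflist inside the guest.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
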
	I0415 05:12:03.842926   14706 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 05:12:03.842997   14706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-921000 minikube.k8s.io/updated_at=2024_04_15T05_12_03_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd91a0c5dbcf69c10661a6c45f66c039ce7b5f02 minikube.k8s.io/name=running-upgrade-921000 minikube.k8s.io/primary=true
	I0415 05:12:03.842997   14706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 05:12:03.878280   14706 kubeadm.go:1107] duration metric: took 35.342375ms to wait for elevateKubeSystemPrivileges
	I0415 05:12:03.892625   14706 ops.go:34] apiserver oom_adj: -16
	W0415 05:12:03.892649   14706 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 05:12:03.892654   14706 kubeadm.go:393] duration metric: took 4m15.561891875s to StartCluster
	I0415 05:12:03.892664   14706 settings.go:142] acquiring lock: {Name:mkd1677925079a7213680748c8d1dd3d4ca25e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:12:03.893081   14706 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:12:03.893498   14706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/kubeconfig: {Name:mk14f6faef80a57910c6a54d13a440297d06327c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:12:03.893836   14706 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:12:03.897393   14706 out.go:177] * Verifying Kubernetes components...
	I0415 05:12:03.893860   14706 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 05:12:03.894020   14706 config.go:182] Loaded profile config "running-upgrade-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:12:03.903584   14706 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-921000"
	I0415 05:12:03.903589   14706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:12:03.903602   14706 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-921000"
	W0415 05:12:03.903607   14706 addons.go:243] addon storage-provisioner should already be in state true
	I0415 05:12:03.903617   14706 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-921000"
	I0415 05:12:03.903621   14706 host.go:66] Checking if "running-upgrade-921000" exists ...
	I0415 05:12:03.903630   14706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-921000"
	I0415 05:12:03.904811   14706 kapi.go:59] client config for running-upgrade-921000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/running-upgrade-921000/client.key", CAFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10216bc80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 05:12:03.905120   14706 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-921000"
	W0415 05:12:03.905125   14706 addons.go:243] addon default-storageclass should already be in state true
	I0415 05:12:03.905133   14706 host.go:66] Checking if "running-upgrade-921000" exists ...
	I0415 05:12:03.909507   14706 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:12:05.080583   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:05.080652   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:03.912573   14706 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 05:12:03.912579   14706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 05:12:03.912585   14706 sshutil.go:53] new ssh client: &{IP:localhost Port:52214 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/running-upgrade-921000/id_rsa Username:docker}
	I0415 05:12:03.913247   14706 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 05:12:03.913253   14706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 05:12:03.913256   14706 sshutil.go:53] new ssh client: &{IP:localhost Port:52214 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/running-upgrade-921000/id_rsa Username:docker}
	I0415 05:12:03.980072   14706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 05:12:03.986308   14706 api_server.go:52] waiting for apiserver process to appear ...
	I0415 05:12:03.986360   14706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:12:03.990299   14706 api_server.go:72] duration metric: took 96.454792ms to wait for apiserver process to appear ...
	I0415 05:12:03.990311   14706 api_server.go:88] waiting for apiserver healthz status ...
	I0415 05:12:03.990318   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:04.022164   14706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 05:12:04.022274   14706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
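
The two applies above use the guest's own kubectl binary against the in-VM kubeconfig rather than the host's context. Sketched as a loop (my structure; the sudo KUBECONFIG=... invocation mirrors the log verbatim):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		// sudo accepts leading VAR=value assignments, so KUBECONFIG is set
		// for the kubectl process only.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", m)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
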
	I0415 05:12:10.082638   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:10.082678   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:08.992238   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:08.992287   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:15.084698   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:15.084720   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:13.992444   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:13.992486   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:20.086731   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:20.086849   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:20.098618   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:20.098714   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:20.109744   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:20.109843   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:20.120258   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:20.120340   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:20.130834   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:20.130918   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:20.140931   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:20.141013   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:20.151445   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:20.151519   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:18.992620   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:18.992646   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:20.161695   14863 logs.go:276] 0 containers: []
	W0415 05:12:20.162400   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:20.162472   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:20.172210   14863 logs.go:276] 0 containers: []
	W0415 05:12:20.172222   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:20.172230   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:20.172234   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:20.183813   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:20.183823   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:20.207677   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:20.207687   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:20.219285   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:20.219294   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:20.322581   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:20.322593   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:20.336537   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:20.336548   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:20.353949   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:20.353960   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:20.390947   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:20.390957   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:20.406423   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:20.406434   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:20.422631   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:20.422641   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:20.436057   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:20.436069   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:20.453033   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:20.453044   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:20.475371   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:20.475382   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:20.479642   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:20.479649   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:20.496626   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:20.496639   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:23.027530   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:23.992898   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:23.992930   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:28.029531   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:28.029648   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:28.041144   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:28.041238   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:28.052431   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:28.052513   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:28.063055   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:28.063132   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:28.077453   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:28.077543   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:28.087891   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:28.087979   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:28.102784   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:28.102863   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:28.113158   14863 logs.go:276] 0 containers: []
	W0415 05:12:28.113172   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:28.113251   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:28.123652   14863 logs.go:276] 0 containers: []
	W0415 05:12:28.123669   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:28.123680   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:28.123686   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:28.128014   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:28.128028   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:28.166215   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:28.166225   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:28.190689   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:28.190698   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:28.227494   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:28.227503   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:28.249524   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:28.249533   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:28.282732   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:28.282743   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:28.297643   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:28.297654   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:28.313318   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:28.313328   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:28.327583   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:28.327595   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:28.343989   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:28.344000   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:28.361379   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:28.361388   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:28.374717   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:28.374727   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:28.385832   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:28.385845   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:28.404962   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:28.404972   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
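The block above is one complete log-gathering pass by process 14863: for each control-plane component, logs.go first enumerates matching containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails each hit with docker logs --tail 400, alongside journalctl, dmesg, and kubectl describe nodes. A minimal local sketch of that enumerate-then-tail pattern follows; it is illustrative only (minikube drives these same commands through its ssh_runner against the guest VM, and the helper name containerIDs is invented here):

    // Sketch: enumerate k8s_* containers per component, then tail each log,
    // mirroring the docker commands visible in the log above. Runs docker
    // locally instead of over SSH, purely for illustration.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    		for _, id := range ids {
    			// Mirrors the `docker logs --tail 400 <id>` calls above.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("--- %s (%s): %d bytes of log\n", c, id, len(logs))
    		}
    	}
    }

Note that stopped components show up too: the filter uses ps -a, which is why exited apiserver/etcd/scheduler containers are still listed as "2 containers" each pass.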
	I0415 05:12:28.993322   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:28.993350   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:33.993907   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:33.993964   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
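The alternating stopped/Checking lines from process 14706 are the health-wait loop in api_server.go: GET https://10.0.2.15:8443/healthz with a short per-request timeout, retried until the apiserver answers or an overall deadline passes. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is the standard net/http error when the per-request timeout fires before any response headers arrive. A minimal sketch under those assumptions (the real client is built from the cluster CA; InsecureSkipVerify here is only because the sketch has none, and the 4-minute deadline is a guess, not minikube's actual value):

    // Sketch: poll the apiserver healthz endpoint until healthy or deadline.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gaps between attempts above
    		Transport: &http.Transport{
    			// Assumption: no cluster CA available in this sketch.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			// Produces exactly the "stopped: ... Client.Timeout exceeded" lines above.
    			fmt.Println("stopped:", err)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		time.Sleep(time.Second) // brief pause before re-probing a non-200 answer
    	}
    	fmt.Println("gave up waiting for apiserver")
    }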
	W0415 05:12:34.433411   14706 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0415 05:12:34.441171   14706 out.go:177] * Enabled addons: storage-provisioner
	I0415 05:12:30.918976   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:34.455823   14706 addons.go:505] duration metric: took 30.56324225s for enable addons: enabled=[storage-provisioner]
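The addon outcome just above is internally consistent: storage-provisioner only needs its manifest applied, while default-storageclass must list StorageClasses through the apiserver, which never became reachable, so only the former is reported enabled and the latter fails with the i/o timeout shown at out.go:239. A hedged client-go sketch of the call that fails (this is not minikube's actual addon code; the kubeconfig path is taken from the log, and it assumes a client-go new enough to take a context in List):

    // Sketch: the StorageClass listing that the default-storageclass addon
    // needs; against an unreachable apiserver this returns the same
    // "dial tcp 10.0.2.15:8443: i/o timeout" seen in the log.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		fmt.Println("Error listing StorageClasses:", err)
    		return
    	}
    	fmt.Println("storage classes:", len(scs.Items))
    }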
	I0415 05:12:35.921306   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:35.921715   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:35.959757   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:35.959924   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:35.977961   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:35.978062   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:35.992247   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:35.992342   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:36.005165   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:36.005243   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:36.015791   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:36.015865   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:36.026912   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:36.026997   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:36.036679   14863 logs.go:276] 0 containers: []
	W0415 05:12:36.036690   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:36.036762   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:36.046909   14863 logs.go:276] 0 containers: []
	W0415 05:12:36.046920   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:36.046928   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:36.046933   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:36.087078   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:36.087091   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:36.101139   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:36.101151   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:36.105263   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:36.105270   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:36.116464   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:36.116475   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:36.128255   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:36.128268   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:36.148689   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:36.148702   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:36.174164   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:36.174184   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:36.212167   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:36.212178   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:36.239924   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:36.239935   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:36.254469   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:36.254483   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:36.265879   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:36.265889   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:36.283396   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:36.283405   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:36.297191   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:36.297207   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:36.312615   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:36.312626   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
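The recurring "container status" step is a shell fallback rather than a fixed tool: run crictl if `which crictl` finds it on PATH, otherwise fall back to docker ps -a, so the same probe works on both containerd/CRI-O and Docker runtimes. A minimal sketch of executing that one-liner (minikube runs it through bash on the guest over SSH; running bash locally here is illustrative only):

    // Sketch: the crictl-or-docker container status fallback from the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(string(out))
    }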
	I0415 05:12:38.831861   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:38.994730   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:38.994804   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:43.834141   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:43.834278   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:43.846270   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:43.846350   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:43.857047   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:43.857140   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:43.867483   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:43.867568   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:43.877802   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:43.877874   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:43.888052   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:43.888133   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:43.898596   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:43.898676   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:43.912096   14863 logs.go:276] 0 containers: []
	W0415 05:12:43.912107   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:43.912172   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:43.926570   14863 logs.go:276] 0 containers: []
	W0415 05:12:43.926582   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:43.926590   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:43.926596   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:43.962480   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:43.962491   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:43.976716   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:43.976728   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:44.016310   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:44.016319   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:44.020472   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:44.020478   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:44.031813   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:44.031824   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:44.044027   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:44.044036   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:44.067798   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:44.067809   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:44.079533   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:44.079544   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:44.096138   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:44.096149   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:44.116380   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:44.116391   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:44.145466   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:44.145476   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:44.159846   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:44.159860   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:44.172248   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:44.172260   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:44.191109   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:44.191121   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:43.996059   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:43.996079   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:46.712293   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:48.996182   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:48.996266   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:51.713565   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:51.713881   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:51.740810   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:51.740962   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:51.758276   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:51.758366   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:51.771934   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:51.772022   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:51.783784   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:51.783869   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:51.798675   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:51.798759   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:51.814559   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:51.814653   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:51.824711   14863 logs.go:276] 0 containers: []
	W0415 05:12:51.824725   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:51.824793   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:51.835108   14863 logs.go:276] 0 containers: []
	W0415 05:12:51.835119   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:51.835126   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:51.835132   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:51.839078   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:51.839084   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:51.863855   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:51.863867   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:51.878851   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:51.878860   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:51.901371   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:51.901380   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:51.940950   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:51.940964   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:51.957126   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:51.957139   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:51.970717   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:51.970728   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:51.995488   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:51.995498   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:52.033297   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:52.033312   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:52.051809   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:52.051820   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:52.066429   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:52.066439   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:52.081273   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:52.081285   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:52.092578   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:52.092591   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:52.112305   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:52.112316   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:54.626202   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:53.998352   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:53.998389   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:59.628466   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:59.628731   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:59.659897   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:59.660033   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:59.675315   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:59.675414   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:59.687666   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:59.687746   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:59.698145   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:59.698232   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:59.708981   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:59.709055   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:59.719707   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:59.719796   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:59.730144   14863 logs.go:276] 0 containers: []
	W0415 05:12:59.730155   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:59.730238   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:59.743933   14863 logs.go:276] 0 containers: []
	W0415 05:12:59.743946   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:59.743954   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:59.743960   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:59.748382   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:59.748389   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:59.765924   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:59.765934   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:59.790383   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:59.790400   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:59.804538   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:59.804555   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:59.816110   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:59.816122   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:59.834637   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:59.834647   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:59.846654   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:59.846665   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:59.882532   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:59.882543   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:59.896977   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:59.896987   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:59.908634   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:59.908645   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:59.920798   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:59.920813   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:59.958774   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:59.958785   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:59.976783   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:59.976794   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:59.996397   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:59.996409   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:59.000409   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:59.000432   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:02.522029   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:04.002403   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:04.002517   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:04.014215   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:13:04.014325   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:04.030381   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:13:04.030469   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:04.046516   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:13:04.046600   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:04.061344   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:13:04.061428   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:04.071985   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:13:04.072072   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:04.082455   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:13:04.082535   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:04.092981   14706 logs.go:276] 0 containers: []
	W0415 05:13:04.092995   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:04.093063   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:04.104754   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:13:04.104769   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:13:04.104774   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:13:04.122440   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:04.122452   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:04.127139   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:13:04.127145   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:13:04.141052   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:13:04.141063   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:13:04.152604   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:13:04.152618   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:13:04.164656   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:13:04.164667   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:13:04.175964   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:04.175976   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:04.199117   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:13:04.199126   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:04.210038   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:04.210050   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:13:04.228774   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:04.228869   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
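The two "Found kubelet problem" warnings above come from logs.go:138, which scans the journalctl output for known failure signatures and echoes matching lines (they reappear later under "X Problems detected in kubelet"). The flagged error itself is the node authorizer denying the kubelet's ConfigMap list: system:node:* credentials may only read objects related to pods bound to that node, and during this upgrade no such relationship exists yet for the coredns ConfigMap. A minimal sketch of the scan, with the pattern list cut down to the two substrings visible in this log (minikube's real list is longer; problemPatterns here is invented):

    // Sketch: scan journal output for known kubelet failure signatures.
    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    var problemPatterns = []string{
    	"failed to list *v1.ConfigMap",      // assumed signature, taken from this log
    	"no relationship found between node", // node-authorizer denial, as above
    }

    func findProblems(journal string) []string {
    	var hits []string
    	sc := bufio.NewScanner(strings.NewReader(journal))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, p := range problemPatterns {
    			if strings.Contains(line, p) {
    				hits = append(hits, line)
    				break
    			}
    		}
    	}
    	return hits
    }

    func main() {
    	journal := "Apr 15 12:08:07 kubelet[4077]: W0415 ... failed to list *v1.ConfigMap ..."
    	for _, h := range findProblems(journal) {
    		fmt.Println("Found kubelet problem:", h)
    	}
    }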
	I0415 05:13:04.247768   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:04.247779   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:04.285436   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:13:04.285447   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:13:04.299501   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:13:04.299514   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:13:04.314938   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:13:04.314949   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:13:04.326399   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:04.326409   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:13:04.326433   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:13:04.326438   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:04.326442   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:04.326447   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:04.326450   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:13:07.524180   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:07.524390   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:07.540574   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:07.540689   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:07.553308   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:07.553397   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:07.571756   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:07.571838   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:07.582348   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:07.582434   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:07.594405   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:07.594479   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:07.608068   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:07.608147   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:07.618220   14863 logs.go:276] 0 containers: []
	W0415 05:13:07.618231   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:07.618300   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:07.629912   14863 logs.go:276] 0 containers: []
	W0415 05:13:07.629926   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:07.629937   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:07.629947   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:07.646495   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:07.646505   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:07.680399   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:07.680411   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:07.692460   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:07.692471   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:07.713512   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:07.713521   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:07.725044   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:07.725055   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:07.763874   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:07.763885   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:07.789875   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:07.789885   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:07.801423   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:07.801435   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:07.816356   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:07.816367   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:07.840746   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:07.840760   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:07.846322   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:07.846332   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:07.860269   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:07.860280   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:07.874282   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:07.874293   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:07.886038   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:07.886048   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:10.406689   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:14.330032   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:15.407676   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:15.407815   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:15.426992   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:15.427096   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:15.441572   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:15.441662   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:15.454173   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:15.454256   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:15.464726   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:15.464798   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:15.478818   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:15.478895   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:15.489008   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:15.489081   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:15.499430   14863 logs.go:276] 0 containers: []
	W0415 05:13:15.499441   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:15.499501   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:15.509316   14863 logs.go:276] 0 containers: []
	W0415 05:13:15.509329   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:15.509337   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:15.509342   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:15.513586   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:15.513593   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:15.528356   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:15.528367   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:15.546274   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:15.546284   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:15.570452   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:15.570459   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:15.603436   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:15.603447   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:15.618304   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:15.618315   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:15.655355   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:15.655366   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:15.669031   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:15.669042   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:15.694518   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:15.694529   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:15.706048   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:15.706060   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:15.717799   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:15.717818   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:15.737028   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:15.737039   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:15.749568   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:15.749578   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:15.764157   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:15.764167   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:18.278397   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:19.332054   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:19.332200   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:19.343162   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:13:19.343240   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:19.361979   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:13:19.362065   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:19.372485   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:13:19.372564   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:19.382533   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:13:19.382603   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:19.393026   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:13:19.393102   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:19.403898   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:13:19.403983   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:19.413790   14706 logs.go:276] 0 containers: []
	W0415 05:13:19.413807   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:19.413873   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:19.423904   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:13:19.423920   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:19.423926   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:19.428222   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:19.428229   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:19.462997   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:13:19.463007   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:13:19.474559   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:13:19.474570   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:13:19.489674   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:13:19.489683   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:13:19.501399   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:13:19.501411   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:19.512958   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:19.512968   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:19.536046   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:19.536057   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:13:19.553607   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:19.553699   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:19.572156   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:13:19.572164   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:13:19.586097   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:13:19.586107   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:13:19.599623   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:13:19.599633   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:13:19.610757   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:13:19.610768   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:13:19.628064   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:13:19.628074   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:13:19.641946   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:19.641955   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:13:19.641982   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:13:19.641987   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:19.641991   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:19.641996   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:19.641999   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:13:23.280510   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:23.280812   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:23.307798   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:23.307947   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:23.325638   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:23.325746   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:23.340693   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:23.340783   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:23.351965   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:23.352056   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:23.362645   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:23.362725   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:23.373914   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:23.373993   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:23.384391   14863 logs.go:276] 0 containers: []
	W0415 05:13:23.384405   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:23.384472   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:23.394396   14863 logs.go:276] 0 containers: []
	W0415 05:13:23.394412   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:23.394419   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:23.394424   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:23.406476   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:23.406487   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:23.430418   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:23.430431   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:23.442939   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:23.442954   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:23.447352   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:23.447357   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:23.480754   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:23.480766   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:23.492544   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:23.492559   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:23.504197   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:23.504207   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:23.519875   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:23.519888   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:23.537444   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:23.537453   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:23.557616   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:23.557625   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:23.586483   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:23.586494   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:23.603647   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:23.603657   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:23.640157   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:23.640207   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:23.655069   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:23.655080   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:26.176684   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:29.645782   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:31.178912   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:31.179081   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:31.199374   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:31.199471   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:31.214437   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:31.214531   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:31.226816   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:31.226895   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:31.238492   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:31.238575   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:31.249596   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:31.249679   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:31.261282   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:31.261365   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:31.272175   14863 logs.go:276] 0 containers: []
	W0415 05:13:31.272189   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:31.272259   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:31.282965   14863 logs.go:276] 0 containers: []
	W0415 05:13:31.282977   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:31.282985   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:31.282992   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:31.318834   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:31.318846   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:31.336952   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:31.336962   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:31.349226   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:31.349239   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:31.373056   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:31.373065   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:31.397556   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:31.397567   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:31.413751   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:31.413762   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:31.432010   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:31.432022   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:31.436287   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:31.436295   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:31.448051   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:31.448063   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:31.460160   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:31.460171   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:31.471759   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:31.471770   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:31.494503   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:31.494512   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:31.531633   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:31.531643   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:31.545331   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:31.545341   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
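Two minikube processes (PIDs 14863 and 14706) are interleaved in this stretch of the log, each stuck in the same loop: probe the apiserver's /healthz endpoint, give up when the client timeout fires ("context deadline exceeded"), then re-enumerate the k8s_* containers and re-collect logs before the next probe. Both probe https://10.0.2.15:8443 because each QEMU guest gets the same default user-mode-networking address. A rough bash equivalent of the probe step (endpoint taken from the log; the 5-second cap and retry delay are inferred from the timestamps, not read out of minikube's source):

	until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
	    echo "apiserver not healthy yet, retrying" >&2
	    sleep 2
	done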
	I0415 05:13:34.060408   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:34.647962   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:34.648141   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:34.666164   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:13:34.666270   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:34.680166   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:13:34.680257   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:34.692007   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:13:34.692086   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:34.702275   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:13:34.702345   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:34.712716   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:13:34.712787   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:34.723387   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:13:34.723467   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:34.733714   14706 logs.go:276] 0 containers: []
	W0415 05:13:34.733727   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:34.733798   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:34.744526   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:13:34.744545   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:13:34.744550   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:13:34.758432   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:13:34.758443   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:13:34.770065   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:13:34.770076   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:13:34.787323   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:34.787333   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:34.811043   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:13:34.811053   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:34.822873   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:34.822884   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:13:34.840229   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:34.840322   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:34.859171   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:34.859178   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:34.863359   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:34.863366   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:34.902584   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:13:34.902594   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:13:34.918218   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:13:34.918228   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:13:34.930484   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:13:34.930494   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:13:34.944951   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:13:34.944963   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:13:34.956143   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:13:34.956154   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:13:34.967654   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:34.967665   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:13:34.967689   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:13:34.967694   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:34.967697   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:34.967701   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:34.967727   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
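The kubelet problem repeated above looks like Kubernetes' node authorizer doing its job: a kubelet may only read a ConfigMap that some pod bound to its node references, and "no relationship found between node 'running-upgrade-921000' and this object" means the apiserver's authorization graph had no such pod when the kubelet's reflector tried to list kube-system/coredns. Assuming a working kubeconfig for the cluster, the denial could be reproduced by hand with impersonation (hypothetical invocation, not part of this report):

	kubectl auth can-i list configmaps \
	    --namespace kube-system \
	    --as system:node:running-upgrade-921000 \
	    --as-group system:nodes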
	I0415 05:13:39.062442   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:39.062626   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:39.079743   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:39.079832   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:39.096478   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:39.096565   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:39.115150   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:39.115235   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:39.126331   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:39.126415   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:39.136978   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:39.137063   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:39.147219   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:39.147299   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:39.157784   14863 logs.go:276] 0 containers: []
	W0415 05:13:39.157797   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:39.157858   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:39.171783   14863 logs.go:276] 0 containers: []
	W0415 05:13:39.171794   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:39.171802   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:39.171807   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:39.185310   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:39.185320   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:39.202732   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:39.202743   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:39.226428   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:39.226440   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:39.264628   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:39.264644   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:39.290018   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:39.290031   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:39.304127   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:39.304138   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:39.322834   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:39.322844   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:39.359739   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:39.359749   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:39.373634   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:39.373644   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:39.385222   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:39.385232   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:39.400031   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:39.400042   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:39.404011   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:39.404017   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:39.418013   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:39.418024   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:39.430009   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:39.430022   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
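Each gathering pass is the same fixed recipe run over SSH: docker ps -a with a k8s_<component> name filter to resolve container IDs, docker logs --tail 400 per ID, then journalctl for the docker/cri-docker and kubelet units, dmesg, kubectl describe nodes, and container status. The status line's backtick trick — sudo `which crictl || echo crictl` ps -a || sudo docker ps -a — uses crictl's full path when it is installed and otherwise falls through to plain docker ps. The per-component part condenses to one loop when reproducing it by hand inside the guest (component names copied from the filters above; a sketch, not minikube's actual code):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	    for id in $(docker ps -a --filter "name=k8s_$c" --format '{{.ID}}'); do
	        echo "== $c $id =="
	        docker logs --tail 400 "$id"
	    done
	done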
	I0415 05:13:41.944310   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:44.970119   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:46.946531   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:46.946919   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:46.988054   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:46.988220   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:47.010650   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:47.010760   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:47.025699   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:47.025792   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:47.038821   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:47.038907   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:47.049568   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:47.049654   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:47.059661   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:47.059733   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:47.069529   14863 logs.go:276] 0 containers: []
	W0415 05:13:47.069541   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:47.069608   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:47.080270   14863 logs.go:276] 0 containers: []
	W0415 05:13:47.080281   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:47.080288   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:47.080293   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:47.084494   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:47.084500   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:47.121548   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:47.121559   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:47.147430   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:47.147442   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:47.171077   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:47.171086   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:47.182273   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:47.182285   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:47.193746   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:47.193757   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:47.215471   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:47.215481   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:47.253610   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:47.253619   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:47.267933   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:47.267943   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:47.282186   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:47.282198   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:47.293428   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:47.293440   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:47.305083   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:47.305095   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:47.320040   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:47.320049   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:47.337140   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:47.337151   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:49.856899   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:49.972635   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:49.972861   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:50.002500   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:13:50.002645   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:50.020648   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:13:50.020759   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:50.036030   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:13:50.036120   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:50.048375   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:13:50.048448   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:50.059033   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:13:50.059113   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:50.069686   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:13:50.069759   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:50.080177   14706 logs.go:276] 0 containers: []
	W0415 05:13:50.080190   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:50.080261   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:50.091069   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:13:50.091083   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:50.091088   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:50.115916   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:13:50.115928   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:50.127961   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:50.127970   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:50.132823   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:13:50.132829   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:13:50.144457   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:13:50.144468   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:13:50.156208   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:13:50.156217   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:13:50.174155   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:13:50.174165   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:13:50.185743   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:13:50.185754   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:13:50.197141   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:50.197151   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:13:50.215515   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:50.215608   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:50.233875   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:50.233882   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:50.269086   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:13:50.269097   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:13:50.283932   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:13:50.283942   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:13:50.297466   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:13:50.297476   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:13:50.313240   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:50.313249   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:13:50.313274   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:13:50.313281   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:13:50.313285   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:13:50.313290   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:50.313299   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:13:54.859155   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:54.859577   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:54.898568   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:54.898730   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:54.919978   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:54.920079   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:54.935651   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:54.935752   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:54.948350   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:54.948433   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:54.962187   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:54.962269   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:54.972664   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:54.972746   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:54.982593   14863 logs.go:276] 0 containers: []
	W0415 05:13:54.982605   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:54.982676   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:54.993021   14863 logs.go:276] 0 containers: []
	W0415 05:13:54.993036   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:54.993044   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:54.993050   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:55.032943   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:55.032958   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:55.037790   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:55.037797   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:55.064032   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:55.064043   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:55.075793   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:55.075804   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:55.094243   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:55.094253   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:55.133337   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:55.133349   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:55.147472   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:55.147481   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:55.165799   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:55.165809   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:55.187511   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:55.187522   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:55.199107   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:55.199117   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:55.214031   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:55.214042   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:55.231801   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:55.231812   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:55.243229   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:55.243240   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:55.259397   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:55.259406   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
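For PID 14863, most control-plane components resolve to two container IDs (kube-apiserver: 28bcee80e1fa and f38c3f6989c2; etcd: b02bdb06ba9c and 0b788794f1b5; and so on). Because the filter runs docker ps with -a, exited containers are listed too, so a pair like this usually means an earlier instance died and was replaced; tailing both, as the passes above do, captures the dead instance's final logs alongside the live one's. Adding a status column to the same filter shows which is which (illustrative command, not from the report):

	docker ps -a --filter name=k8s_kube-apiserver --format 'table {{.ID}}\t{{.Status}}'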
	I0415 05:13:57.785473   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:00.317069   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:02.787654   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:02.787971   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:02.825626   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:02.825781   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:02.852147   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:02.852244   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:02.866221   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:02.866319   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:02.878314   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:02.878394   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:02.888447   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:02.888525   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:02.898909   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:02.898987   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:02.908991   14863 logs.go:276] 0 containers: []
	W0415 05:14:02.909003   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:02.909070   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:02.919880   14863 logs.go:276] 0 containers: []
	W0415 05:14:02.919893   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:02.919903   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:02.919908   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:02.944748   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:02.944762   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:02.965028   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:02.965038   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:02.977075   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:02.977088   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:02.994033   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:02.994043   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:03.008340   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:03.008372   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:03.028114   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:03.028124   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:03.039491   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:03.039503   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:03.051312   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:03.051325   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:03.070803   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:03.070814   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:03.108176   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:03.108193   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:03.144091   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:03.144102   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:03.148547   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:03.148554   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:03.163455   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:03.163465   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:03.174829   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:03.174839   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:05.319422   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:05.319691   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:05.348067   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:14:05.348219   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:05.365907   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:14:05.366013   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:05.379767   14706 logs.go:276] 2 containers: [cade86587dff b1165f66174c]
	I0415 05:14:05.379862   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:05.391132   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:14:05.391207   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:05.401524   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:14:05.401613   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:05.420261   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:14:05.420341   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:05.434758   14706 logs.go:276] 0 containers: []
	W0415 05:14:05.434771   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:05.434840   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:05.447349   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:14:05.447363   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:14:05.447368   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:14:05.458885   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:14:05.458893   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:14:05.476267   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:14:05.476277   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:14:05.488702   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:14:05.488711   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:14:05.503737   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:14:05.503747   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:14:05.515893   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:14:05.515904   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:14:05.527589   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:14:05.527600   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:14:05.544689   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:05.544701   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:05.568994   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:14:05.569007   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:05.580124   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:05.580135   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:14:05.597995   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:05.598087   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:05.616528   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:05.616535   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:05.621140   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:05.621147   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:05.656557   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:14:05.656571   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:14:05.671988   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:05.671998   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:14:05.672021   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:14:05.672026   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:05.672030   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:05.672034   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:05.672036   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:14:05.699118   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:10.701226   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:10.701644   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:10.741765   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:10.741928   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:10.762486   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:10.762603   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:10.779333   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:10.779422   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:10.791602   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:10.791684   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:10.802287   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:10.802363   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:10.812966   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:10.813051   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:10.823183   14863 logs.go:276] 0 containers: []
	W0415 05:14:10.823193   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:10.823256   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:10.833546   14863 logs.go:276] 0 containers: []
	W0415 05:14:10.833562   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:10.833570   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:10.833577   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:10.871440   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:10.871452   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:10.906028   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:10.906043   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:10.917570   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:10.917582   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:10.934611   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:10.934622   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:10.948705   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:10.948720   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:10.973253   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:10.973266   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:10.996612   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:10.996622   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:11.010728   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:11.010738   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:11.023774   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:11.023783   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:11.038907   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:11.038918   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:11.058861   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:11.058870   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:11.062972   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:11.062980   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:11.077442   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:11.077453   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:11.091547   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:11.091557   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:13.605571   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:15.674662   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:18.607547   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:18.607686   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:18.624878   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:18.624986   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:18.638655   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:18.638754   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:18.650632   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:18.650707   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:18.660922   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:18.661006   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:18.671604   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:18.671681   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:18.681772   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:18.681855   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:18.692055   14863 logs.go:276] 0 containers: []
	W0415 05:14:18.692067   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:18.692135   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:18.702300   14863 logs.go:276] 0 containers: []
	W0415 05:14:18.702311   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:18.702319   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:18.702325   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:18.706498   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:18.706505   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:18.720561   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:18.720572   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:18.732053   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:18.732064   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:18.750340   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:18.750351   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:18.761947   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:18.761959   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:18.779021   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:18.779031   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:18.817619   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:18.817628   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:18.851825   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:18.851835   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:18.870593   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:18.870605   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:18.890316   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:18.890327   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:18.913213   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:18.913232   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:18.939420   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:18.939431   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:18.951648   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:18.951658   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:18.966584   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:18.966594   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:20.676874   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:20.677223   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:20.710116   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:14:20.710275   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:20.728709   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:14:20.728792   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:20.743119   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:14:20.743203   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:20.755313   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:14:20.755401   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:20.766500   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:14:20.766579   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:20.780557   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:14:20.780636   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:20.791221   14706 logs.go:276] 0 containers: []
	W0415 05:14:20.791231   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:20.791295   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:20.802119   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:14:20.802137   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:20.802142   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:14:20.819859   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:20.819951   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:20.838603   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:14:20.838610   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:14:20.852255   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:14:20.852265   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:14:20.863598   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:14:20.863609   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:14:20.879077   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:20.879086   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:20.883501   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:14:20.883508   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:14:20.894432   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:14:20.894442   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:14:20.911585   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:14:20.911598   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:14:20.923181   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:14:20.923193   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:20.935276   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:14:20.935285   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:14:20.956990   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:14:20.957002   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:14:20.968470   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:14:20.968482   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:14:20.980524   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:14:20.980533   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:14:20.992419   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:20.992429   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:21.027234   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:21.027246   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:21.052429   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:21.052437   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:14:21.052460   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:14:21.052464   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:21.052468   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:21.052472   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:21.052475   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
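
The repeated "Checking apiserver healthz ... stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" pairs above reflect a poll loop with a short per-request timeout. Below is a minimal Go sketch of that pattern, not minikube's actual api_server.go code: the 5-second client timeout, 30-second overall deadline, and the InsecureSkipVerify transport are illustrative assumptions.

    // healthpoll.go: minimal sketch of an apiserver healthz wait loop.
    // Endpoint, timeouts, and TLS settings are assumptions for illustration.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // source of "Client.Timeout exceeded" errors
            Transport: &http.Transport{
                // a test VM typically serves a self-signed apiserver certificate
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(30 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Printf("stopped: %v\n", err) // mirrors the api_server.go:269 lines
                time.Sleep(2 * time.Second)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for apiserver healthz")
    }

Run against an unreachable endpoint, this prints the same Client.Timeout error seen throughout this section.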
	I0415 05:14:21.480492   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:26.482692   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:26.482897   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:26.505238   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:26.505348   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:26.520943   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:26.521029   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:26.533525   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:26.533608   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:26.544900   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:26.544984   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:26.559345   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:26.559430   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:26.570166   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:26.570239   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:26.581255   14863 logs.go:276] 0 containers: []
	W0415 05:14:26.581266   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:26.581336   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:26.591790   14863 logs.go:276] 0 containers: []
	W0415 05:14:26.591808   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:26.591817   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:26.591823   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:26.596245   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:26.596252   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:26.614564   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:26.614577   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:26.625738   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:26.625750   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:26.643838   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:26.643851   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:26.664370   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:26.664381   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:26.702279   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:26.702289   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:26.720720   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:26.720731   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:26.734790   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:26.734799   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:26.759193   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:26.759204   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:26.770920   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:26.770931   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:26.786255   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:26.786265   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:26.810080   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:26.810087   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:26.843718   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:26.843728   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:26.855661   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:26.855673   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
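
Each diagnostic cycle above begins by resolving component containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which yields the "N containers: [...]" lines (including the empty results for "kindnet" and "storage-provisioner"). A small Go sketch of that lookup follows; it assumes a docker CLI on PATH, and containerIDs is a hypothetical helper name, not minikube's.

    // containerids.go: sketch of producing ID lists such as
    // "2 containers: [28bcee80e1fa f38c3f6989c2]".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        // kubelet names Docker containers k8s_<component>_<pod>_..., so a
        // name filter on the k8s_ prefix finds the matching containers
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "lookup failed:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

An empty result for a component corresponds to the W-level "No container was found matching ..." lines in the log.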
	I0415 05:14:29.374991   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:31.056316   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:34.376986   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:34.377114   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:34.389581   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:34.389677   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:34.400631   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:34.400710   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:34.411198   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:34.411273   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:34.422131   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:34.422207   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:34.433454   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:34.433542   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:34.450964   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:34.451038   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:34.461035   14863 logs.go:276] 0 containers: []
	W0415 05:14:34.461049   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:34.461115   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:34.472170   14863 logs.go:276] 0 containers: []
	W0415 05:14:34.472181   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:34.472188   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:34.472193   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:34.509712   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:34.509722   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:34.514038   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:34.514046   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:34.526160   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:34.526171   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:34.540280   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:34.540290   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:34.552131   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:34.552142   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:34.575430   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:34.575438   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:34.608932   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:34.608943   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:34.623491   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:34.623502   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:34.639550   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:34.639560   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:34.651309   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:34.651323   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:34.668122   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:34.668132   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:34.691905   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:34.691915   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:34.703579   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:34.703590   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:34.718376   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:34.718387   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:36.058764   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:36.058997   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:36.085552   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:14:36.085691   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:36.105810   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:14:36.105912   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:36.118826   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:14:36.118916   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:36.130068   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:14:36.130149   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:36.140878   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:14:36.140953   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:36.151237   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:14:36.151311   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:36.161007   14706 logs.go:276] 0 containers: []
	W0415 05:14:36.161020   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:36.161087   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:36.171325   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:14:36.171345   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:14:36.171350   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:14:36.186670   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:14:36.186681   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:14:36.203871   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:36.203880   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:14:36.221880   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:36.221974   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:36.240524   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:14:36.240531   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:14:36.261216   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:14:36.261226   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:14:36.272512   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:14:36.272522   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:14:36.284379   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:36.284390   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:36.289036   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:36.289043   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:36.324766   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:14:36.324776   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:14:36.338966   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:36.338975   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:36.362869   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:14:36.362881   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:36.374628   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:14:36.374639   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:14:36.389907   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:14:36.389917   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:14:36.405930   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:14:36.405941   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:14:36.418224   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:14:36.418237   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:14:36.429669   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:36.429679   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:14:36.429708   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:14:36.429712   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:36.429716   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:36.429720   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:36.429723   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
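
The "Found kubelet problem" warnings come from scanning the journalctl -u kubelet output for error lines. The two hits repeated in this run are node-authorizer denials: the kubelet for running-upgrade-921000 may only read ConfigMaps bound to pods scheduled on that node, and the authorizer found no such relationship for the coredns ConfigMap. A rough Go sketch of such a scan, reading journal text from stdin, is below; the matching heuristic (klog error lines and "is forbidden" denials) is an assumption, not the real logs.go:138 rule set.

    // kubeletscan.go: rough sketch of flagging kubelet problem lines.
    // The match rule here is an assumed heuristic for illustration.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            // "]: E" catches klog error lines like "kubelet[4077]: E0415 ..."
            if strings.Contains(line, "]: E") || strings.Contains(line, "is forbidden") {
                fmt.Println("Found kubelet problem:", line)
            }
        }
    }

Fed with sudo journalctl -u kubelet -n 400, this would reproduce the two denial lines summarized under "X Problems detected in kubelet".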
	I0415 05:14:37.239193   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:42.241267   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:42.241496   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:42.268303   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:42.268441   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:42.285741   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:42.285843   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:42.304848   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:42.304920   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:42.316644   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:42.316729   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:42.328261   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:42.328354   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:42.347145   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:42.347216   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:42.357772   14863 logs.go:276] 0 containers: []
	W0415 05:14:42.357786   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:42.357859   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:42.367875   14863 logs.go:276] 0 containers: []
	W0415 05:14:42.367886   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:42.367894   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:42.367918   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:42.405119   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:42.405129   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:42.419735   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:42.419744   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:42.431572   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:42.431582   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:42.442873   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:42.442884   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:42.457660   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:42.457670   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:42.483787   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:42.483805   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:42.507883   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:42.507894   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:42.531238   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:42.531248   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:42.548909   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:42.548921   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:42.569755   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:42.569765   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:42.573879   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:42.573885   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:42.608281   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:42.608294   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:42.622033   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:42.622043   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:42.632467   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:42.632478   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:45.146620   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:46.433538   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:50.148760   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:50.149078   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:51.435793   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:51.436018   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:51.463215   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:14:51.463352   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:51.485665   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:14:51.485753   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:51.498362   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:14:51.498441   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:51.509483   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:14:51.509555   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:51.519800   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:14:51.519869   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:51.530517   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:14:51.530592   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:51.542320   14706 logs.go:276] 0 containers: []
	W0415 05:14:51.542332   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:51.542400   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:51.553413   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:14:51.553431   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:14:51.553440   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:14:51.565343   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:14:51.565353   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:14:51.580861   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:51.580872   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:51.604704   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:51.604714   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:14:51.621815   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:51.621907   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:51.640142   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:14:51.640147   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:14:51.651589   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:14:51.651602   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:14:51.663358   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:14:51.663368   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:14:51.675259   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:51.675270   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:51.680406   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:14:51.680416   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:14:51.694822   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:14:51.694833   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:14:51.711661   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:14:51.711672   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:51.723731   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:51.723742   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:51.758843   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:14:51.758853   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:14:51.777720   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:14:51.777731   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:14:51.789357   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:14:51.789366   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:14:51.813243   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:51.813252   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:14:51.813277   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:14:51.813281   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:14:51.813284   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:14:51.813290   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:14:51.813293   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
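
After container discovery, each "Gathering logs for <name> [<id>] ..." line is paired with a docker logs --tail 400 <id> invocation (run over SSH through /bin/bash -c in the real runner). A condensed local Go sketch of that fan-out follows; the tail length of 400 matches the commands in this log, and the name/ID pairs are sample data copied from this run, not values to reuse.

    // gatherlogs.go: condensed sketch of the per-container log fan-out.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        targets := []struct{ name, id string }{ // sample pairs from this run
            {"kube-apiserver", "7843ee650133"},
            {"etcd", "1c339b8f44b5"},
            {"kube-proxy", "88adc3f2c9ac"},
        }
        for _, t := range targets {
            fmt.Printf("Gathering logs for %s [%s] ...\n", t.name, t.id)
            out, err := exec.Command("docker", "logs", "--tail", "400", t.id).CombinedOutput()
            if err != nil {
                fmt.Printf("%s: %v\n", t.name, err)
            }
            fmt.Print(string(out))
        }
    }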
	I0415 05:14:50.177130   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:50.177276   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:50.195686   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:50.195796   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:50.209820   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:50.209904   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:50.222609   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:50.222679   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:50.232857   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:50.232917   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:50.243532   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:50.243620   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:50.255807   14863 logs.go:276] 0 containers: []
	W0415 05:14:50.255820   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:50.255890   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:50.265966   14863 logs.go:276] 0 containers: []
	W0415 05:14:50.265978   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:50.265991   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:50.265996   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:50.280182   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:50.280192   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:50.297549   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:50.297560   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:50.312302   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:50.312313   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:50.349803   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:50.349815   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:50.353944   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:50.353950   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:50.386436   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:50.386445   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:50.400551   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:50.400561   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:50.424541   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:50.424551   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:50.444711   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:50.444720   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:50.456678   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:50.456689   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:50.480038   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:50.480046   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:50.493575   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:50.493585   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:50.504983   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:50.504994   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:50.517042   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:50.517052   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:53.034084   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:58.036214   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:58.036402   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:58.053337   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:58.053433   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:58.065906   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:58.065992   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:58.077163   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:58.077230   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:58.087923   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:58.088126   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:58.100035   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:58.100123   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:58.110752   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:58.110836   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:58.120285   14863 logs.go:276] 0 containers: []
	W0415 05:14:58.120299   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:58.120379   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:58.129987   14863 logs.go:276] 0 containers: []
	W0415 05:14:58.129998   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:58.130007   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:58.130012   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:58.142774   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:58.142784   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:58.156881   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:58.156895   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:58.169355   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:58.169366   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:58.186818   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:58.186828   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:58.206106   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:58.206115   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:58.229054   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:58.229063   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:58.233265   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:58.233272   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:58.247346   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:58.247357   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:58.266089   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:58.266100   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:58.278119   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:58.278130   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:58.297707   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:58.297718   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:58.322209   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:58.322219   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:58.337955   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:58.337966   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:58.373919   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:58.373927   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:01.816957   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:00.909091   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:06.819025   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:06.819193   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:06.837920   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:15:06.838006   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:06.848628   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:15:06.848715   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:06.858852   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:15:06.858934   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:06.870807   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:15:06.870879   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:06.881808   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:15:06.881889   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:06.892798   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:15:06.892874   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:06.902890   14706 logs.go:276] 0 containers: []
	W0415 05:15:06.902901   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:06.902956   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:06.913238   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:15:06.913262   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:06.913269   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:06.917858   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:06.917865   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:06.964707   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:15:06.964720   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:15:06.977880   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:06.977891   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:15:06.996058   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:06.996151   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:07.015197   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:15:07.015205   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:15:07.035800   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:15:07.035813   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:15:07.048725   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:15:07.048736   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:15:07.060174   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:15:07.060187   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:15:07.072682   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:15:07.072695   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:15:07.089248   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:15:07.089260   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:15:07.109729   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:15:07.109739   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:15:07.124969   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:15:07.124981   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:15:07.141939   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:15:07.141951   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:15:07.159215   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:07.159225   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:07.184308   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:15:07.184316   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:07.202193   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:07.202206   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:15:07.202238   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:15:07.202242   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:07.202255   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:07.202262   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:07.202265   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
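
The "container status" step uses a shell fallback: run crictl ps -a if crictl resolves on the node, otherwise fall back to docker ps -a (the `which crictl || echo crictl` ... || sudo docker ps -a one-liner above). The same try-then-fall-back shape in Go, as a sketch with sudo omitted for simplicity:

    // containerstatus.go: crictl-or-docker fallback behind "container status".
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl absent or failing: fall back to docker, as the
            // trailing "|| sudo docker ps -a" does in the log
            out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no usable container runtime CLI:", err)
            return
        }
        fmt.Print(string(out))
    }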
	I0415 05:15:05.911387   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:05.911746   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:05.942008   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:15:05.942138   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:05.959872   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:15:05.959980   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:05.973791   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:15:05.973881   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:05.985577   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:15:05.985670   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:05.996451   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:15:05.996535   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:06.018016   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:15:06.018097   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:06.028183   14863 logs.go:276] 0 containers: []
	W0415 05:15:06.028194   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:06.028253   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:06.038793   14863 logs.go:276] 0 containers: []
	W0415 05:15:06.038803   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:15:06.038813   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:15:06.038819   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:15:06.052277   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:15:06.052288   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:15:06.063623   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:15:06.063636   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:15:06.076875   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:06.076885   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:06.099217   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:15:06.099229   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:15:06.128161   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:15:06.128171   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:15:06.145546   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:15:06.145559   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:06.157245   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:06.157256   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:06.161330   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:06.161336   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:06.198088   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:15:06.198098   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:15:06.212116   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:15:06.212126   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:15:06.230509   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:15:06.230520   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:15:06.250436   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:06.250447   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:15:06.288003   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:15:06.288012   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:15:06.302322   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:15:06.302331   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:15:08.821359   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:13.822500   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:13.822802   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:13.855669   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:15:13.855829   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:13.874592   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:15:13.874706   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:13.892817   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:15:13.892898   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:13.904616   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:15:13.904696   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:13.915245   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:15:13.915322   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:13.925469   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:15:13.925557   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:13.935296   14863 logs.go:276] 0 containers: []
	W0415 05:15:13.935311   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:13.935380   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:13.946081   14863 logs.go:276] 0 containers: []
	W0415 05:15:13.946091   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:15:13.946099   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:13.946104   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:13.967473   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:13.967481   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:14.010522   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:15:14.010534   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:15:14.026606   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:15:14.026618   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:15:14.039075   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:15:14.039088   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:15:14.051563   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:15:14.051575   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:15:14.075417   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:15:14.075428   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:15:14.086514   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:15:14.086526   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:15:14.107217   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:14.107229   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:15:14.146069   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:15:14.146080   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:15:14.160799   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:15:14.160810   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:14.172493   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:14.172504   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:14.176627   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:15:14.176634   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:15:14.190928   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:15:14.190939   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:15:14.206120   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:15:14.206147   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:15:17.205950   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:16.725409   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:22.207935   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:22.208048   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:22.219558   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:15:22.219645   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:22.231784   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:15:22.231863   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:22.243369   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:15:22.243449   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:22.256135   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:15:22.256218   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:22.267446   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:15:22.267532   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:22.279343   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:15:22.279447   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:22.290984   14706 logs.go:276] 0 containers: []
	W0415 05:15:22.290997   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:22.291070   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:22.302905   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:15:22.302926   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:22.302931   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:22.328152   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:15:22.328168   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:15:22.348035   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:15:22.348051   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:22.362121   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:22.362133   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:22.367498   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:22.367511   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:22.410213   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:15:22.410225   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:15:22.424960   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:15:22.424973   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:15:22.437037   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:15:22.437048   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:15:22.449125   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:15:22.449137   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:15:22.461183   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:15:22.461196   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:15:22.481941   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:15:22.481955   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:15:22.494512   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:22.494527   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:15:21.727874   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:21.727984   14863 kubeadm.go:591] duration metric: took 4m3.440115541s to restartPrimaryControlPlane
	W0415 05:15:21.728068   14863 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0415 05:15:21.728106   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0415 05:15:22.706230   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 05:15:22.711152   14863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 05:15:22.714223   14863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 05:15:22.716804   14863 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 05:15:22.716810   14863 kubeadm.go:156] found existing configuration files:
	
	I0415 05:15:22.716842   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/admin.conf
	I0415 05:15:22.719434   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 05:15:22.719461   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 05:15:22.722625   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/kubelet.conf
	I0415 05:15:22.725142   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 05:15:22.725165   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 05:15:22.727963   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/controller-manager.conf
	I0415 05:15:22.731288   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 05:15:22.731319   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 05:15:22.734566   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/scheduler.conf
	I0415 05:15:22.737202   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 05:15:22.737236   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 05:15:22.739883   14863 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 05:15:22.758895   14863 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0415 05:15:22.758930   14863 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 05:15:22.806900   14863 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 05:15:22.806956   14863 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 05:15:22.807006   14863 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 05:15:22.855606   14863 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 05:15:22.860787   14863 out.go:204]   - Generating certificates and keys ...
	I0415 05:15:22.860855   14863 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 05:15:22.860899   14863 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 05:15:22.860935   14863 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0415 05:15:22.860966   14863 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0415 05:15:22.861003   14863 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0415 05:15:22.861036   14863 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0415 05:15:22.861073   14863 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0415 05:15:22.861105   14863 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0415 05:15:22.861152   14863 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0415 05:15:22.861194   14863 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0415 05:15:22.861213   14863 kubeadm.go:309] [certs] Using the existing "sa" key
	I0415 05:15:22.861240   14863 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 05:15:23.035702   14863 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 05:15:23.082763   14863 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 05:15:23.132643   14863 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 05:15:23.176657   14863 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 05:15:23.206560   14863 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 05:15:23.206894   14863 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 05:15:23.206966   14863 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 05:15:23.294768   14863 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 05:15:23.298990   14863 out.go:204]   - Booting up control plane ...
	I0415 05:15:23.299039   14863 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 05:15:23.299079   14863 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 05:15:23.299113   14863 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 05:15:23.299155   14863 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 05:15:23.299237   14863 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	W0415 05:15:22.514802   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:22.514901   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:22.534341   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:15:22.534360   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:15:22.549015   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:15:22.549029   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:15:22.564443   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:15:22.564454   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:15:22.576349   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:22.576359   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:15:22.576390   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:15:22.576396   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:22.576399   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:22.576403   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:22.576405   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:15:27.299919   14863 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001250 seconds
	I0415 05:15:27.300030   14863 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 05:15:27.304410   14863 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 05:15:27.812228   14863 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 05:15:27.812341   14863 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-413000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 05:15:28.316333   14863 kubeadm.go:309] [bootstrap-token] Using token: 4kyxh7.4qpbypddm1axy9ri
	I0415 05:15:28.322249   14863 out.go:204]   - Configuring RBAC rules ...
	I0415 05:15:28.322310   14863 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 05:15:28.322354   14863 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 05:15:28.329037   14863 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 05:15:28.329999   14863 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 05:15:28.330914   14863 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 05:15:28.331754   14863 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 05:15:28.335165   14863 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 05:15:28.523420   14863 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 05:15:28.720363   14863 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 05:15:28.720688   14863 kubeadm.go:309] 
	I0415 05:15:28.720716   14863 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 05:15:28.720721   14863 kubeadm.go:309] 
	I0415 05:15:28.720755   14863 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 05:15:28.720758   14863 kubeadm.go:309] 
	I0415 05:15:28.720768   14863 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 05:15:28.720793   14863 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 05:15:28.720826   14863 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 05:15:28.720831   14863 kubeadm.go:309] 
	I0415 05:15:28.720869   14863 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 05:15:28.720873   14863 kubeadm.go:309] 
	I0415 05:15:28.720895   14863 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 05:15:28.720897   14863 kubeadm.go:309] 
	I0415 05:15:28.720924   14863 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 05:15:28.720963   14863 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 05:15:28.721007   14863 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 05:15:28.721012   14863 kubeadm.go:309] 
	I0415 05:15:28.721069   14863 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 05:15:28.721112   14863 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 05:15:28.721116   14863 kubeadm.go:309] 
	I0415 05:15:28.721154   14863 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4kyxh7.4qpbypddm1axy9ri \
	I0415 05:15:28.721215   14863 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bccfbc6802506c01fbf9bb31ef37a61b86ce5e1e10fb5fd68834dd4a5a1b8043 \
	I0415 05:15:28.721233   14863 kubeadm.go:309] 	--control-plane 
	I0415 05:15:28.721241   14863 kubeadm.go:309] 
	I0415 05:15:28.721286   14863 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 05:15:28.721290   14863 kubeadm.go:309] 
	I0415 05:15:28.721344   14863 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4kyxh7.4qpbypddm1axy9ri \
	I0415 05:15:28.721398   14863 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bccfbc6802506c01fbf9bb31ef37a61b86ce5e1e10fb5fd68834dd4a5a1b8043 
	I0415 05:15:28.721678   14863 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 05:15:28.721687   14863 cni.go:84] Creating CNI manager for ""
	I0415 05:15:28.721696   14863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:15:28.725220   14863 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 05:15:28.732222   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 05:15:28.735278   14863 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0415 05:15:28.740205   14863 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 05:15:28.740305   14863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 05:15:28.740306   14863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-413000 minikube.k8s.io/updated_at=2024_04_15T05_15_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd91a0c5dbcf69c10661a6c45f66c039ce7b5f02 minikube.k8s.io/name=stopped-upgrade-413000 minikube.k8s.io/primary=true
	I0415 05:15:28.779985   14863 ops.go:34] apiserver oom_adj: -16
	I0415 05:15:28.788472   14863 kubeadm.go:1107] duration metric: took 48.235417ms to wait for elevateKubeSystemPrivileges
	W0415 05:15:28.788502   14863 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 05:15:28.788506   14863 kubeadm.go:393] duration metric: took 4m10.514978042s to StartCluster
	I0415 05:15:28.788517   14863 settings.go:142] acquiring lock: {Name:mkd1677925079a7213680748c8d1dd3d4ca25e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:15:28.788603   14863 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:15:28.789018   14863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/kubeconfig: {Name:mk14f6faef80a57910c6a54d13a440297d06327c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:15:28.789226   14863 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:15:28.792261   14863 out.go:177] * Verifying Kubernetes components...
	I0415 05:15:28.789233   14863 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 05:15:28.789305   14863 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:15:28.800262   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:15:28.800296   14863 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-413000"
	I0415 05:15:28.800298   14863 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-413000"
	I0415 05:15:28.800308   14863 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-413000"
	I0415 05:15:28.800311   14863 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-413000"
	W0415 05:15:28.800314   14863 addons.go:243] addon storage-provisioner should already be in state true
	I0415 05:15:28.800332   14863 host.go:66] Checking if "stopped-upgrade-413000" exists ...
	I0415 05:15:28.805230   14863 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:15:28.809246   14863 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 05:15:28.809252   14863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 05:15:28.809261   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:15:28.810550   14863 kapi.go:59] client config for stopped-upgrade-413000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.key", CAFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c93c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 05:15:28.810669   14863 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-413000"
	W0415 05:15:28.810675   14863 addons.go:243] addon default-storageclass should already be in state true
	I0415 05:15:28.810686   14863 host.go:66] Checking if "stopped-upgrade-413000" exists ...
	I0415 05:15:28.811351   14863 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 05:15:28.811356   14863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 05:15:28.811360   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:15:28.872906   14863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 05:15:28.877838   14863 api_server.go:52] waiting for apiserver process to appear ...
	I0415 05:15:28.877887   14863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:15:28.886058   14863 api_server.go:72] duration metric: took 96.824167ms to wait for apiserver process to appear ...
	I0415 05:15:28.886070   14863 api_server.go:88] waiting for apiserver healthz status ...
	I0415 05:15:28.886078   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:28.886281   14863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 05:15:28.889184   14863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 05:15:33.888050   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:33.888135   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:32.578563   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:38.888609   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:38.888633   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:37.580639   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:37.580786   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:37.593914   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:15:37.593996   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:37.605210   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:15:37.605310   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:37.616283   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:15:37.616367   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:37.626741   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:15:37.626824   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:37.637712   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:15:37.637785   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:37.647808   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:15:37.647885   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:37.658041   14706 logs.go:276] 0 containers: []
	W0415 05:15:37.658053   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:37.658127   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:37.674336   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:15:37.674352   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:37.674358   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:37.680805   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:15:37.680816   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:15:37.692410   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:15:37.692420   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:15:37.703507   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:15:37.703517   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:15:37.715344   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:37.715353   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:37.738647   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:15:37.738656   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:15:37.755686   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:15:37.755697   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:15:37.769555   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:15:37.769565   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:15:37.781071   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:15:37.781081   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:15:37.796197   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:15:37.796207   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:15:37.808183   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:37.808193   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:15:37.825676   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:37.825769   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:37.844329   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:37.844337   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:37.880561   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:15:37.880572   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:15:37.896007   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:15:37.896019   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:15:37.917208   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:15:37.917218   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:37.929347   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:37.929358   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:15:37.929383   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:15:37.929388   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:37.929392   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:37.929396   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:37.929398   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:15:43.888864   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:43.888886   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:48.889239   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:48.889264   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:47.933126   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:53.889764   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:53.889787   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:52.935170   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:52.935291   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:52.951822   14706 logs.go:276] 1 containers: [7843ee650133]
	I0415 05:15:52.951909   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:52.962431   14706 logs.go:276] 1 containers: [1c339b8f44b5]
	I0415 05:15:52.962511   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:52.973720   14706 logs.go:276] 4 containers: [9fa55864ea59 b556fe4e9cb3 cade86587dff b1165f66174c]
	I0415 05:15:52.973810   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:52.984465   14706 logs.go:276] 1 containers: [ab183a09180b]
	I0415 05:15:52.984544   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:52.995127   14706 logs.go:276] 1 containers: [88adc3f2c9ac]
	I0415 05:15:52.995203   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:53.006048   14706 logs.go:276] 1 containers: [0625fbf211d0]
	I0415 05:15:53.006136   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:53.016234   14706 logs.go:276] 0 containers: []
	W0415 05:15:53.016244   14706 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:53.016314   14706 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:53.026805   14706 logs.go:276] 1 containers: [d09ac7081a1b]
	I0415 05:15:53.026824   14706 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:53.026830   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:53.061356   14706 logs.go:123] Gathering logs for coredns [b1165f66174c] ...
	I0415 05:15:53.061368   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1165f66174c"
	I0415 05:15:53.073972   14706 logs.go:123] Gathering logs for kube-proxy [88adc3f2c9ac] ...
	I0415 05:15:53.073983   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88adc3f2c9ac"
	I0415 05:15:53.085625   14706 logs.go:123] Gathering logs for kube-controller-manager [0625fbf211d0] ...
	I0415 05:15:53.085637   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0625fbf211d0"
	I0415 05:15:53.107397   14706 logs.go:123] Gathering logs for storage-provisioner [d09ac7081a1b] ...
	I0415 05:15:53.107408   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09ac7081a1b"
	I0415 05:15:53.125189   14706 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:53.125199   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 05:15:53.145085   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:53.145177   14706 logs.go:138] Found kubelet problem: Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:53.164510   14706 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:53.164533   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:53.170883   14706 logs.go:123] Gathering logs for kube-scheduler [ab183a09180b] ...
	I0415 05:15:53.170896   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab183a09180b"
	I0415 05:15:53.195576   14706 logs.go:123] Gathering logs for kube-apiserver [7843ee650133] ...
	I0415 05:15:53.195588   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7843ee650133"
	I0415 05:15:53.209721   14706 logs.go:123] Gathering logs for etcd [1c339b8f44b5] ...
	I0415 05:15:53.209732   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c339b8f44b5"
	I0415 05:15:53.223805   14706 logs.go:123] Gathering logs for coredns [cade86587dff] ...
	I0415 05:15:53.223817   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cade86587dff"
	I0415 05:15:53.235855   14706 logs.go:123] Gathering logs for container status ...
	I0415 05:15:53.235865   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:53.247717   14706 logs.go:123] Gathering logs for coredns [9fa55864ea59] ...
	I0415 05:15:53.247728   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa55864ea59"
	I0415 05:15:53.259720   14706 logs.go:123] Gathering logs for coredns [b556fe4e9cb3] ...
	I0415 05:15:53.259731   14706 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b556fe4e9cb3"
	I0415 05:15:53.271522   14706 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:53.271532   14706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:53.294534   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:53.294543   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 05:15:53.294571   14706 out.go:239] X Problems detected in kubelet:
	W0415 05:15:53.294576   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: W0415 12:08:07.694612    4077 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	W0415 05:15:53.294579   14706 out.go:239]   Apr 15 12:08:07 running-upgrade-921000 kubelet[4077]: E0415 12:08:07.694631    4077 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-921000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-921000' and this object
	I0415 05:15:53.294584   14706 out.go:304] Setting ErrFile to fd 2...
	I0415 05:15:53.294586   14706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:15:58.890687   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:58.890709   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0415 05:15:59.287240   14863 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0415 05:15:59.294515   14863 out.go:177] * Enabled addons: storage-provisioner
	I0415 05:15:59.304433   14863 addons.go:505] duration metric: took 30.516468s for enable addons: enabled=[storage-provisioner]
	I0415 05:16:03.891650   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:03.891675   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:03.297827   14706 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:08.300022   14706 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:08.306322   14706 out.go:177] 
	W0415 05:16:08.311310   14706 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0415 05:16:08.311322   14706 out.go:239] * 
	W0415 05:16:08.312310   14706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:16:08.323213   14706 out.go:177] 
	I0415 05:16:08.892913   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:08.892942   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:13.894612   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:13.894658   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:18.896808   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:18.896833   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
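
Note: the repeated "Gathering logs" / healthz entries above follow one fixed pattern: list the containers for each control-plane component, tail each container's logs, pull the kubelet and docker journals, then probe the apiserver. A minimal shell sketch that reproduces this by hand inside the guest (component names and the 400-line tails are taken from the commands recorded in this log; the curl invocation is an assumption for a manual healthz check, not a command minikube itself runs):

    # Tail the last 400 lines of every k8s_* container minikube inspects above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      for id in $(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'); do
        echo "== ${c} [${id}] =="
        docker logs --tail 400 "${id}"
      done
    done
    # Unit journals gathered in the same loop.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    # Manual version of the healthz probe (10.0.2.15:8443 is the guest address
    # from the log); -k skips TLS verification, and --max-time mirrors the ~5s
    # client timeout visible between the "Checking" and "stopped" entries above.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz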
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-04-15 12:07:10 UTC, ends at Mon 2024-04-15 12:16:24 UTC. --
	Apr 15 12:16:05 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:05Z" level=error msg="Error response from daemon: No such container: b1165f66174c6c90e33833cb695d4a297f4599982412a65a2a5ab96ee4d2f341 Failed to get stats from container b1165f66174c6c90e33833cb695d4a297f4599982412a65a2a5ab96ee4d2f341"
	Apr 15 12:16:05 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:05Z" level=error msg="ContainerStats resp: {<nil> }"
	Apr 15 12:16:05 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:05Z" level=error msg="Error response from daemon: No such container: cade86587dfff41e598162525e00ab21153879397b7e6b4f993a7a107a40a51f Failed to get stats from container cade86587dfff41e598162525e00ab21153879397b7e6b4f993a7a107a40a51f"
	Apr 15 12:16:06 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:06Z" level=error msg="ContainerStats resp: {0x40008d8e80 linux}"
	Apr 15 12:16:07 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:07Z" level=error msg="ContainerStats resp: {0x4000822b00 linux}"
	Apr 15 12:16:07 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:07Z" level=error msg="ContainerStats resp: {0x4000822f80 linux}"
	Apr 15 12:16:07 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:07Z" level=error msg="ContainerStats resp: {0x40009db140 linux}"
	Apr 15 12:16:07 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:07Z" level=error msg="ContainerStats resp: {0x4000823780 linux}"
	Apr 15 12:16:07 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:07Z" level=error msg="ContainerStats resp: {0x40009dbdc0 linux}"
	Apr 15 12:16:07 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:07Z" level=error msg="ContainerStats resp: {0x40009dbf00 linux}"
	Apr 15 12:16:07 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:07Z" level=error msg="ContainerStats resp: {0x400082ee00 linux}"
	Apr 15 12:16:09 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 15 12:16:14 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 15 12:16:17 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:17Z" level=error msg="ContainerStats resp: {0x4000a90d80 linux}"
	Apr 15 12:16:17 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:17Z" level=error msg="ContainerStats resp: {0x4000a91480 linux}"
	Apr 15 12:16:18 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:18Z" level=error msg="ContainerStats resp: {0x40004389c0 linux}"
	Apr 15 12:16:19 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:19Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 15 12:16:19 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:19Z" level=error msg="ContainerStats resp: {0x4000822380 linux}"
	Apr 15 12:16:19 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:19Z" level=error msg="ContainerStats resp: {0x4000439d40 linux}"
	Apr 15 12:16:19 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:19Z" level=error msg="ContainerStats resp: {0x40001e4440 linux}"
	Apr 15 12:16:19 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:19Z" level=error msg="ContainerStats resp: {0x400082e180 linux}"
	Apr 15 12:16:19 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:19Z" level=error msg="ContainerStats resp: {0x4000823380 linux}"
	Apr 15 12:16:19 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:19Z" level=error msg="ContainerStats resp: {0x4000823540 linux}"
	Apr 15 12:16:19 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:19Z" level=error msg="ContainerStats resp: {0x4000823c80 linux}"
	Apr 15 12:16:24 running-upgrade-921000 cri-dockerd[3163]: time="2024-04-15T12:16:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	37e049dd5198a       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   9b4b7fc17cfeb
	bc76fe8348c5b       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   84897f0558569
	9fa55864ea59b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   84897f0558569
	b556fe4e9cb30       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9b4b7fc17cfeb
	88adc3f2c9ace       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   190cc382606aa
	d09ac7081a1be       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   f545e75938a10
	7843ee6501338       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   32c3134077e2c
	1c339b8f44b52       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   9a440849c15f7
	0625fbf211d0e       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   281940c92b7ce
	ab183a09180be       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   bdd3aa8d12bd7
	
	
	==> coredns [37e049dd5198] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4362272099244356581.1877686113644794210. HINFO: read udp 10.244.0.2:56282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4362272099244356581.1877686113644794210. HINFO: read udp 10.244.0.2:35474->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4362272099244356581.1877686113644794210. HINFO: read udp 10.244.0.2:40066->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4362272099244356581.1877686113644794210. HINFO: read udp 10.244.0.2:42883->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4362272099244356581.1877686113644794210. HINFO: read udp 10.244.0.2:43495->10.0.2.3:53: i/o timeout
	
	
	==> coredns [9fa55864ea59] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:35192->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:60871->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:33093->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:47035->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:48345->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:54626->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:57587->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:38521->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:57562->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3573069501893329899.6003989778480996893. HINFO: read udp 10.244.0.3:57114->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b556fe4e9cb3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 691003085203811812.4405288550164964771. HINFO: read udp 10.244.0.2:35123->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 691003085203811812.4405288550164964771. HINFO: read udp 10.244.0.2:47732->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 691003085203811812.4405288550164964771. HINFO: read udp 10.244.0.2:34915->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 691003085203811812.4405288550164964771. HINFO: read udp 10.244.0.2:49254->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 691003085203811812.4405288550164964771. HINFO: read udp 10.244.0.2:54596->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 691003085203811812.4405288550164964771. HINFO: read udp 10.244.0.2:47811->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bc76fe8348c5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2944520977275295899.5202583989692258717. HINFO: read udp 10.244.0.3:46210->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2944520977275295899.5202583989692258717. HINFO: read udp 10.244.0.3:57752->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2944520977275295899.5202583989692258717. HINFO: read udp 10.244.0.3:59375->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2944520977275295899.5202583989692258717. HINFO: read udp 10.244.0.3:57618->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2944520977275295899.5202583989692258717. HINFO: read udp 10.244.0.3:55778->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-921000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-921000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd91a0c5dbcf69c10661a6c45f66c039ce7b5f02
	                    minikube.k8s.io/name=running-upgrade-921000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T05_12_03_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 12:12:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-921000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 12:16:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 12:12:03 +0000   Mon, 15 Apr 2024 12:11:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 12:12:03 +0000   Mon, 15 Apr 2024 12:11:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 12:12:03 +0000   Mon, 15 Apr 2024 12:11:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 12:12:03 +0000   Mon, 15 Apr 2024 12:12:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-921000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 84ba6aab704244ed82da21cf3a6de3b2
	  System UUID:                84ba6aab704244ed82da21cf3a6de3b2
	  Boot ID:                    da5c8dcd-3f3e-42b9-8363-d8bbf2091a0f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-nn4lw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-nt5wc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-921000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kube-apiserver-running-upgrade-921000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-921000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-vd7dw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-921000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-921000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-921000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-921000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node running-upgrade-921000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node running-upgrade-921000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node running-upgrade-921000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m21s                  kubelet          Node running-upgrade-921000 status is now: NodeReady
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s                   node-controller  Node running-upgrade-921000 event: Registered Node running-upgrade-921000 in Controller
	
	
	==> dmesg <==
	[  +1.780912] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.060553] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.068539] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.139785] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.060016] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.055497] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.732915] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[  +9.143350] systemd-fstab-generator[1925]: Ignoring "noauto" for root device
	[  +2.541043] systemd-fstab-generator[2204]: Ignoring "noauto" for root device
	[  +0.145979] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.093126] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +0.089291] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +4.273838] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.196762] systemd-fstab-generator[3118]: Ignoring "noauto" for root device
	[  +0.072525] systemd-fstab-generator[3131]: Ignoring "noauto" for root device
	[  +0.065793] systemd-fstab-generator[3142]: Ignoring "noauto" for root device
	[  +0.076643] systemd-fstab-generator[3156]: Ignoring "noauto" for root device
	[  +1.986961] systemd-fstab-generator[3305]: Ignoring "noauto" for root device
	[  +2.577060] systemd-fstab-generator[3790]: Ignoring "noauto" for root device
	[  +2.170824] systemd-fstab-generator[4071]: Ignoring "noauto" for root device
	[Apr15 12:08] kauditd_printk_skb: 68 callbacks suppressed
	[Apr15 12:11] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.395477] systemd-fstab-generator[10366]: Ignoring "noauto" for root device
	[Apr15 12:12] systemd-fstab-generator[10962]: Ignoring "noauto" for root device
	[  +0.467080] systemd-fstab-generator[11098]: Ignoring "noauto" for root device
	
	
	==> etcd [1c339b8f44b5] <==
	{"level":"info","ts":"2024-04-15T12:11:59.437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-04-15T12:11:59.437Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-04-15T12:11:59.442Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-15T12:11:59.445Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-15T12:11:59.446Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-15T12:11:59.446Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-15T12:11:59.446Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-15T12:11:59.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-15T12:11:59.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-15T12:11:59.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-04-15T12:11:59.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-04-15T12:11:59.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-15T12:11:59.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-04-15T12:11:59.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-15T12:11:59.804Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T12:11:59.805Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T12:11:59.805Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-921000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T12:11:59.805Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T12:11:59.805Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-04-15T12:11:59.805Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T12:11:59.805Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T12:11:59.809Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T12:11:59.809Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T12:11:59.815Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T12:11:59.815Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:16:24 up 9 min,  0 users,  load average: 1.27, 0.56, 0.26
	Linux running-upgrade-921000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [7843ee650133] <==
	I0415 12:12:01.038670       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0415 12:12:01.038705       1 cache.go:39] Caches are synced for autoregister controller
	I0415 12:12:01.038712       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0415 12:12:01.038841       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0415 12:12:01.039009       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0415 12:12:01.039021       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0415 12:12:01.052034       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0415 12:12:01.773689       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0415 12:12:01.948374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 12:12:01.951859       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 12:12:01.951954       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 12:12:02.093954       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 12:12:02.104205       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 12:12:02.205638       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0415 12:12:02.207683       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0415 12:12:02.208145       1 controller.go:611] quota admission added evaluator for: endpoints
	I0415 12:12:02.209586       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 12:12:03.076624       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0415 12:12:03.778469       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0415 12:12:03.781456       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0415 12:12:03.785599       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0415 12:12:03.832588       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 12:12:16.582461       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0415 12:12:16.829547       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0415 12:12:17.640434       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [0625fbf211d0] <==
	I0415 12:12:16.089474       1 shared_informer.go:262] Caches are synced for TTL
	I0415 12:12:16.090048       1 shared_informer.go:262] Caches are synced for taint
	I0415 12:12:16.090102       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0415 12:12:16.090141       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-921000. Assuming now as a timestamp.
	I0415 12:12:16.090162       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0415 12:12:16.090215       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0415 12:12:16.090321       1 event.go:294] "Event occurred" object="running-upgrade-921000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-921000 event: Registered Node running-upgrade-921000 in Controller"
	I0415 12:12:16.097484       1 shared_informer.go:262] Caches are synced for resource quota
	I0415 12:12:16.123484       1 shared_informer.go:262] Caches are synced for persistent volume
	I0415 12:12:16.123544       1 shared_informer.go:262] Caches are synced for daemon sets
	I0415 12:12:16.130224       1 shared_informer.go:262] Caches are synced for resource quota
	I0415 12:12:16.154231       1 shared_informer.go:262] Caches are synced for node
	I0415 12:12:16.154246       1 range_allocator.go:173] Starting range CIDR allocator
	I0415 12:12:16.154248       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0415 12:12:16.154251       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0415 12:12:16.156812       1 range_allocator.go:374] Set node running-upgrade-921000 PodCIDR to [10.244.0.0/24]
	I0415 12:12:16.159992       1 shared_informer.go:262] Caches are synced for attach detach
	I0415 12:12:16.172422       1 shared_informer.go:262] Caches are synced for GC
	I0415 12:12:16.547259       1 shared_informer.go:262] Caches are synced for garbage collector
	I0415 12:12:16.572675       1 shared_informer.go:262] Caches are synced for garbage collector
	I0415 12:12:16.572690       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0415 12:12:16.584955       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0415 12:12:16.832932       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vd7dw"
	I0415 12:12:16.930554       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nt5wc"
	I0415 12:12:16.934633       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nn4lw"
	
	
	==> kube-proxy [88adc3f2c9ac] <==
	I0415 12:12:17.627062       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0415 12:12:17.627110       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0415 12:12:17.627131       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0415 12:12:17.637955       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0415 12:12:17.637969       1 server_others.go:206] "Using iptables Proxier"
	I0415 12:12:17.638086       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0415 12:12:17.638252       1 server.go:661] "Version info" version="v1.24.1"
	I0415 12:12:17.638263       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 12:12:17.638654       1 config.go:317] "Starting service config controller"
	I0415 12:12:17.638671       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0415 12:12:17.638709       1 config.go:226] "Starting endpoint slice config controller"
	I0415 12:12:17.638715       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0415 12:12:17.639103       1 config.go:444] "Starting node config controller"
	I0415 12:12:17.639132       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0415 12:12:17.740236       1 shared_informer.go:262] Caches are synced for node config
	I0415 12:12:17.740246       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0415 12:12:17.740259       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [ab183a09180b] <==
	W0415 12:12:00.997408       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 12:12:00.997411       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 12:12:00.997422       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 12:12:00.997426       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 12:12:00.997436       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 12:12:00.997439       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 12:12:00.997454       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 12:12:00.997461       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 12:12:00.997519       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 12:12:00.997526       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 12:12:01.820855       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 12:12:01.821119       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 12:12:01.843328       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 12:12:01.843392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 12:12:01.900531       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 12:12:01.900599       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 12:12:01.931023       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 12:12:01.931056       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 12:12:01.986360       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 12:12:01.986537       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 12:12:01.991044       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 12:12:01.991132       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 12:12:02.005523       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 12:12:02.005538       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 12:12:02.295680       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-04-15 12:07:10 UTC, ends at Mon 2024-04-15 12:16:24 UTC. --
	Apr 15 12:12:04 running-upgrade-921000 kubelet[10968]: I0415 12:12:04.130376   10968 reconciler.go:157] "Reconciler: start to sync state"
	Apr 15 12:12:04 running-upgrade-921000 kubelet[10968]: E0415 12:12:04.211652   10968 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-921000\" already exists" pod="kube-system/etcd-running-upgrade-921000"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.095391   10968 topology_manager.go:200] "Topology Admit Handler"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.243783   10968 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.243854   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cdab1d65-1a59-47cb-9a97-8a4bc5f797e7-tmp\") pod \"storage-provisioner\" (UID: \"cdab1d65-1a59-47cb-9a97-8a4bc5f797e7\") " pod="kube-system/storage-provisioner"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.244032   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v296t\" (UniqueName: \"kubernetes.io/projected/cdab1d65-1a59-47cb-9a97-8a4bc5f797e7-kube-api-access-v296t\") pod \"storage-provisioner\" (UID: \"cdab1d65-1a59-47cb-9a97-8a4bc5f797e7\") " pod="kube-system/storage-provisioner"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.244242   10968 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: E0415 12:12:16.346793   10968 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: E0415 12:12:16.346815   10968 projected.go:192] Error preparing data for projected volume kube-api-access-v296t for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: E0415 12:12:16.346850   10968 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/cdab1d65-1a59-47cb-9a97-8a4bc5f797e7-kube-api-access-v296t podName:cdab1d65-1a59-47cb-9a97-8a4bc5f797e7 nodeName:}" failed. No retries permitted until 2024-04-15 12:12:16.846836442 +0000 UTC m=+13.078239422 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v296t" (UniqueName: "kubernetes.io/projected/cdab1d65-1a59-47cb-9a97-8a4bc5f797e7-kube-api-access-v296t") pod "storage-provisioner" (UID: "cdab1d65-1a59-47cb-9a97-8a4bc5f797e7") : configmap "kube-root-ca.crt" not found
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.835011   10968 topology_manager.go:200] "Topology Admit Handler"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.933607   10968 topology_manager.go:200] "Topology Admit Handler"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.942540   10968 topology_manager.go:200] "Topology Admit Handler"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.947696   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39530186-d120-4da5-848a-10fb6a55696e-lib-modules\") pod \"kube-proxy-vd7dw\" (UID: \"39530186-d120-4da5-848a-10fb6a55696e\") " pod="kube-system/kube-proxy-vd7dw"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.948283   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39530186-d120-4da5-848a-10fb6a55696e-kube-proxy\") pod \"kube-proxy-vd7dw\" (UID: \"39530186-d120-4da5-848a-10fb6a55696e\") " pod="kube-system/kube-proxy-vd7dw"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.948315   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39530186-d120-4da5-848a-10fb6a55696e-xtables-lock\") pod \"kube-proxy-vd7dw\" (UID: \"39530186-d120-4da5-848a-10fb6a55696e\") " pod="kube-system/kube-proxy-vd7dw"
	Apr 15 12:12:16 running-upgrade-921000 kubelet[10968]: I0415 12:12:16.948335   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wjgm\" (UniqueName: \"kubernetes.io/projected/39530186-d120-4da5-848a-10fb6a55696e-kube-api-access-2wjgm\") pod \"kube-proxy-vd7dw\" (UID: \"39530186-d120-4da5-848a-10fb6a55696e\") " pod="kube-system/kube-proxy-vd7dw"
	Apr 15 12:12:17 running-upgrade-921000 kubelet[10968]: I0415 12:12:17.050250   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw5rq\" (UniqueName: \"kubernetes.io/projected/716136e6-3177-47ad-a326-f8562bc68d70-kube-api-access-pw5rq\") pod \"coredns-6d4b75cb6d-nn4lw\" (UID: \"716136e6-3177-47ad-a326-f8562bc68d70\") " pod="kube-system/coredns-6d4b75cb6d-nn4lw"
	Apr 15 12:12:17 running-upgrade-921000 kubelet[10968]: I0415 12:12:17.050334   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/716136e6-3177-47ad-a326-f8562bc68d70-config-volume\") pod \"coredns-6d4b75cb6d-nn4lw\" (UID: \"716136e6-3177-47ad-a326-f8562bc68d70\") " pod="kube-system/coredns-6d4b75cb6d-nn4lw"
	Apr 15 12:12:17 running-upgrade-921000 kubelet[10968]: I0415 12:12:17.050388   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b9a1ec9-5f60-40c0-8449-e39e6435f80c-config-volume\") pod \"coredns-6d4b75cb6d-nt5wc\" (UID: \"2b9a1ec9-5f60-40c0-8449-e39e6435f80c\") " pod="kube-system/coredns-6d4b75cb6d-nt5wc"
	Apr 15 12:12:17 running-upgrade-921000 kubelet[10968]: I0415 12:12:17.050404   10968 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cwgk\" (UniqueName: \"kubernetes.io/projected/2b9a1ec9-5f60-40c0-8449-e39e6435f80c-kube-api-access-7cwgk\") pod \"coredns-6d4b75cb6d-nt5wc\" (UID: \"2b9a1ec9-5f60-40c0-8449-e39e6435f80c\") " pod="kube-system/coredns-6d4b75cb6d-nt5wc"
	Apr 15 12:12:18 running-upgrade-921000 kubelet[10968]: I0415 12:12:18.076795   10968 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="84897f055856991e4cc4aa9c478640f79221dc12d3c9a3806004fecf7663a1fd"
	Apr 15 12:12:18 running-upgrade-921000 kubelet[10968]: I0415 12:12:18.078322   10968 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9b4b7fc17cfebedc90c579f772e4bf42a61794c98300ccae4a047eebbe26b0f4"
	Apr 15 12:16:05 running-upgrade-921000 kubelet[10968]: I0415 12:16:05.411485   10968 scope.go:110] "RemoveContainer" containerID="b1165f66174c6c90e33833cb695d4a297f4599982412a65a2a5ab96ee4d2f341"
	Apr 15 12:16:05 running-upgrade-921000 kubelet[10968]: I0415 12:16:05.429659   10968 scope.go:110] "RemoveContainer" containerID="cade86587dfff41e598162525e00ab21153879397b7e6b4f993a7a107a40a51f"
	
	
	==> storage-provisioner [d09ac7081a1b] <==
	I0415 12:12:17.222692       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 12:12:17.227601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 12:12:17.227658       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 12:12:17.232651       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 12:12:17.232795       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"431d8acc-8691-49b9-8c42-12a955a15af2", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-921000_66edd7bc-bfd4-47bb-a0a1-11d80e75c2eb became leader
	I0415 12:12:17.232827       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-921000_66edd7bc-bfd4-47bb-a0a1-11d80e75c2eb!
	I0415 12:12:17.334207       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-921000_66edd7bc-bfd4-47bb-a0a1-11d80e75c2eb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-921000 -n running-upgrade-921000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-921000 -n running-upgrade-921000: exit status 2 (15.647102375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-921000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-921000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-921000
--- FAIL: TestRunningBinaryUpgrade (595.66s)
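The TestKubernetesUpgrade record below fails the same way at VM creation: every qemu launch aborts with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused", which points at the socket_vmnet daemon not listening on the CI host rather than at the Kubernetes upgrade path itself. As a minimal sketch (not part of the test suite), a Go probe of the same unix socket can confirm this before re-running; the socket path is taken from the log, and the two-second timeout is an arbitrary assumption:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that the qemu-system-aarch64 launch in the log dials.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On the failing host this reproduces the "Connection refused" seen below.
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}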

TestKubernetesUpgrade (17.18s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-487000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-487000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.803799458s)

-- stdout --
	* [kubernetes-upgrade-487000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-487000" primary control-plane node in "kubernetes-upgrade-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:09:47.717726   14786 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:09:47.717877   14786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:09:47.717883   14786 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:47.717885   14786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:09:47.718014   14786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:09:47.719148   14786 out.go:298] Setting JSON to false
	I0415 05:09:47.735688   14786 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7758,"bootTime":1713175229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:09:47.735749   14786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:09:47.741279   14786 out.go:177] * [kubernetes-upgrade-487000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:09:47.749272   14786 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:09:47.753284   14786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:09:47.749331   14786 notify.go:220] Checking for updates...
	I0415 05:09:47.759146   14786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:09:47.766196   14786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:09:47.769226   14786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:09:47.772274   14786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:09:47.774089   14786 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:09:47.774157   14786 config.go:182] Loaded profile config "running-upgrade-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:09:47.774203   14786 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:09:47.778233   14786 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:09:47.785136   14786 start.go:297] selected driver: qemu2
	I0415 05:09:47.785145   14786 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:09:47.785153   14786 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:09:47.787555   14786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:09:47.790204   14786 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:09:47.793341   14786 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 05:09:47.793395   14786 cni.go:84] Creating CNI manager for ""
	I0415 05:09:47.793403   14786 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 05:09:47.793435   14786 start.go:340] cluster config:
	{Name:kubernetes-upgrade-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:09:47.797973   14786 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:09:47.805226   14786 out.go:177] * Starting "kubernetes-upgrade-487000" primary control-plane node in "kubernetes-upgrade-487000" cluster
	I0415 05:09:47.809229   14786 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 05:09:47.809244   14786 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 05:09:47.809252   14786 cache.go:56] Caching tarball of preloaded images
	I0415 05:09:47.809311   14786 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:09:47.809325   14786 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 05:09:47.809379   14786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/kubernetes-upgrade-487000/config.json ...
	I0415 05:09:47.809394   14786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/kubernetes-upgrade-487000/config.json: {Name:mka940f91a47709465c3381f45c6ef92e97d0976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:09:47.809645   14786 start.go:360] acquireMachinesLock for kubernetes-upgrade-487000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:09:47.809681   14786 start.go:364] duration metric: took 28.834µs to acquireMachinesLock for "kubernetes-upgrade-487000"
	I0415 05:09:47.809693   14786 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:09:47.809730   14786 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:09:47.813225   14786 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:09:47.840537   14786 start.go:159] libmachine.API.Create for "kubernetes-upgrade-487000" (driver="qemu2")
	I0415 05:09:47.840568   14786 client.go:168] LocalClient.Create starting
	I0415 05:09:47.840648   14786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:09:47.840681   14786 main.go:141] libmachine: Decoding PEM data...
	I0415 05:09:47.840691   14786 main.go:141] libmachine: Parsing certificate...
	I0415 05:09:47.840730   14786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:09:47.840753   14786 main.go:141] libmachine: Decoding PEM data...
	I0415 05:09:47.840759   14786 main.go:141] libmachine: Parsing certificate...
	I0415 05:09:47.841111   14786 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:09:47.972960   14786 main.go:141] libmachine: Creating SSH key...
	I0415 05:09:48.074258   14786 main.go:141] libmachine: Creating Disk image...
	I0415 05:09:48.074265   14786 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:09:48.074465   14786 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2
	I0415 05:09:48.087866   14786 main.go:141] libmachine: STDOUT: 
	I0415 05:09:48.087888   14786 main.go:141] libmachine: STDERR: 
	I0415 05:09:48.087957   14786 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2 +20000M
	I0415 05:09:48.099236   14786 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:09:48.099258   14786 main.go:141] libmachine: STDERR: 
	I0415 05:09:48.099277   14786 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2
	I0415 05:09:48.099282   14786 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:09:48.099309   14786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4f:dc:4d:0d:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2
	I0415 05:09:48.101089   14786 main.go:141] libmachine: STDOUT: 
	I0415 05:09:48.101107   14786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:09:48.101125   14786 client.go:171] duration metric: took 260.562375ms to LocalClient.Create
	I0415 05:09:50.103236   14786 start.go:128] duration metric: took 2.293565416s to createHost
	I0415 05:09:50.103342   14786 start.go:83] releasing machines lock for "kubernetes-upgrade-487000", held for 2.293744375s
	W0415 05:09:50.103442   14786 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:09:50.113974   14786 out.go:177] * Deleting "kubernetes-upgrade-487000" in qemu2 ...
	W0415 05:09:50.135672   14786 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:09:50.135718   14786 start.go:728] Will try again in 5 seconds ...
	I0415 05:09:55.137710   14786 start.go:360] acquireMachinesLock for kubernetes-upgrade-487000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:09:55.138372   14786 start.go:364] duration metric: took 557.333µs to acquireMachinesLock for "kubernetes-upgrade-487000"
	I0415 05:09:55.138515   14786 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:09:55.138741   14786 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:09:55.148353   14786 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:09:55.200938   14786 start.go:159] libmachine.API.Create for "kubernetes-upgrade-487000" (driver="qemu2")
	I0415 05:09:55.200999   14786 client.go:168] LocalClient.Create starting
	I0415 05:09:55.201129   14786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:09:55.201202   14786 main.go:141] libmachine: Decoding PEM data...
	I0415 05:09:55.201228   14786 main.go:141] libmachine: Parsing certificate...
	I0415 05:09:55.201307   14786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:09:55.201358   14786 main.go:141] libmachine: Decoding PEM data...
	I0415 05:09:55.201370   14786 main.go:141] libmachine: Parsing certificate...
	I0415 05:09:55.202029   14786 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:09:55.335723   14786 main.go:141] libmachine: Creating SSH key...
	I0415 05:09:55.421393   14786 main.go:141] libmachine: Creating Disk image...
	I0415 05:09:55.421401   14786 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:09:55.421598   14786 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2
	I0415 05:09:55.434310   14786 main.go:141] libmachine: STDOUT: 
	I0415 05:09:55.434334   14786 main.go:141] libmachine: STDERR: 
	I0415 05:09:55.434395   14786 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2 +20000M
	I0415 05:09:55.445795   14786 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:09:55.445810   14786 main.go:141] libmachine: STDERR: 
	I0415 05:09:55.445823   14786 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2
	I0415 05:09:55.445863   14786 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:09:55.445896   14786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:61:e0:c0:17:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2
	I0415 05:09:55.447671   14786 main.go:141] libmachine: STDOUT: 
	I0415 05:09:55.447686   14786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:09:55.447701   14786 client.go:171] duration metric: took 246.705208ms to LocalClient.Create
	I0415 05:09:57.449920   14786 start.go:128] duration metric: took 2.311210791s to createHost
	I0415 05:09:57.450018   14786 start.go:83] releasing machines lock for "kubernetes-upgrade-487000", held for 2.311716917s
	W0415 05:09:57.450341   14786 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:09:57.459912   14786 out.go:177] 
	W0415 05:09:57.464140   14786 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:09:57.464170   14786 out.go:239] * 
	* 
	W0415 05:09:57.466971   14786 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:09:57.475869   14786 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-487000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-487000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-487000: (1.939971666s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-487000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-487000 status --format={{.Host}}: exit status 7 (56.115917ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-487000 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-487000 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.185756833s)

-- stdout --
	* [kubernetes-upgrade-487000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-487000" primary control-plane node in "kubernetes-upgrade-487000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-487000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-487000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:09:59.520304   14815 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:09:59.520431   14815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:09:59.520434   14815 out.go:304] Setting ErrFile to fd 2...
	I0415 05:09:59.520437   14815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:09:59.520562   14815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:09:59.521647   14815 out.go:298] Setting JSON to false
	I0415 05:09:59.537657   14815 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7770,"bootTime":1713175229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:09:59.537724   14815 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:09:59.542777   14815 out.go:177] * [kubernetes-upgrade-487000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:09:59.550898   14815 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:09:59.553817   14815 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:09:59.550955   14815 notify.go:220] Checking for updates...
	I0415 05:09:59.556841   14815 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:09:59.559784   14815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:09:59.562806   14815 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:09:59.565817   14815 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:09:59.569147   14815 config.go:182] Loaded profile config "kubernetes-upgrade-487000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 05:09:59.569390   14815 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:09:59.572808   14815 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:09:59.579796   14815 start.go:297] selected driver: qemu2
	I0415 05:09:59.579805   14815 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:09:59.579857   14815 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:09:59.582210   14815 cni.go:84] Creating CNI manager for ""
	I0415 05:09:59.582228   14815 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:09:59.582256   14815 start.go:340] cluster config:
	{Name:kubernetes-upgrade-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:09:59.586333   14815 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:09:59.593779   14815 out.go:177] * Starting "kubernetes-upgrade-487000" primary control-plane node in "kubernetes-upgrade-487000" cluster
	I0415 05:09:59.597821   14815 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 05:09:59.597835   14815 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0415 05:09:59.597840   14815 cache.go:56] Caching tarball of preloaded images
	I0415 05:09:59.597890   14815 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:09:59.597894   14815 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 05:09:59.597943   14815 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/kubernetes-upgrade-487000/config.json ...
	I0415 05:09:59.598305   14815 start.go:360] acquireMachinesLock for kubernetes-upgrade-487000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:09:59.598332   14815 start.go:364] duration metric: took 21.584µs to acquireMachinesLock for "kubernetes-upgrade-487000"
	I0415 05:09:59.598342   14815 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:09:59.598348   14815 fix.go:54] fixHost starting: 
	I0415 05:09:59.598451   14815 fix.go:112] recreateIfNeeded on kubernetes-upgrade-487000: state=Stopped err=<nil>
	W0415 05:09:59.598458   14815 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:09:59.606842   14815 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-487000" ...
	I0415 05:09:59.610793   14815 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:61:e0:c0:17:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2
	I0415 05:09:59.612638   14815 main.go:141] libmachine: STDOUT: 
	I0415 05:09:59.612656   14815 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:09:59.612687   14815 fix.go:56] duration metric: took 14.339875ms for fixHost
	I0415 05:09:59.612690   14815 start.go:83] releasing machines lock for "kubernetes-upgrade-487000", held for 14.355042ms
	W0415 05:09:59.612695   14815 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:09:59.612726   14815 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:09:59.612729   14815 start.go:728] Will try again in 5 seconds ...
	I0415 05:10:04.614751   14815 start.go:360] acquireMachinesLock for kubernetes-upgrade-487000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:10:04.615287   14815 start.go:364] duration metric: took 436.25µs to acquireMachinesLock for "kubernetes-upgrade-487000"
	I0415 05:10:04.615550   14815 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:10:04.615573   14815 fix.go:54] fixHost starting: 
	I0415 05:10:04.616325   14815 fix.go:112] recreateIfNeeded on kubernetes-upgrade-487000: state=Stopped err=<nil>
	W0415 05:10:04.616352   14815 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:10:04.625328   14815 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-487000" ...
	I0415 05:10:04.629540   14815 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:61:e0:c0:17:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubernetes-upgrade-487000/disk.qcow2
	I0415 05:10:04.638897   14815 main.go:141] libmachine: STDOUT: 
	I0415 05:10:04.638957   14815 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:10:04.639036   14815 fix.go:56] duration metric: took 23.466625ms for fixHost
	I0415 05:10:04.639054   14815 start.go:83] releasing machines lock for "kubernetes-upgrade-487000", held for 23.646917ms
	W0415 05:10:04.639231   14815 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-487000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-487000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:10:04.647400   14815 out.go:177] 
	W0415 05:10:04.651410   14815 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:10:04.651463   14815 out.go:239] * 
	* 
	W0415 05:10:04.653452   14815 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:10:04.662256   14815 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-487000 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-487000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-487000 version --output=json: exit status 1 (62.295791ms)

** stderr ** 
	error: context "kubernetes-upgrade-487000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-04-15 05:10:04.739263 -0700 PDT m=+1007.726415084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-487000 -n kubernetes-upgrade-487000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-487000 -n kubernetes-upgrade-487000: exit status 7 (34.299875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-487000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-487000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-487000
--- FAIL: TestKubernetesUpgrade (17.18s)
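
Note: every failed start above dies on the same dial error against the socket_vmnet unix socket, which points at the host environment (no socket_vmnet daemon listening) rather than at minikube itself. As a minimal sketch, assuming only the socket path shown in the log, the Go probe below reproduces the "Connection refused" with a plain unix-socket dial; probeSocketVMnet is a hypothetical helper, not minikube code:

	package main

	import (
		"fmt"
		"net"
	)

	// probeSocketVMnet dials the unix socket that socket_vmnet_client
	// connects to; a "connection refused" here matches the error above.
	func probeSocketVMnet(path string) error {
		conn, err := net.Dial("unix", path)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		fmt.Println("socket_vmnet is listening")
	}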

TestNoKubernetes/serial/StartWithK8s (12.42s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-683000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-683000 --driver=qemu2 : exit status 80 (12.39064625s)

-- stdout --
	* [NoKubernetes-683000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-683000" primary control-plane node in "NoKubernetes-683000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-683000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-683000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-683000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-683000 -n NoKubernetes-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-683000 -n NoKubernetes-683000: exit status 7 (33.400959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-683000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (12.42s)

TestNoKubernetes/serial/StartWithStopK8s (5.37s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-683000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-683000 --no-kubernetes --driver=qemu2 : exit status 80 (5.294341125s)

-- stdout --
	* [NoKubernetes-683000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-683000
	* Restarting existing qemu2 VM for "NoKubernetes-683000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-683000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-683000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-683000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-683000 -n NoKubernetes-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-683000 -n NoKubernetes-683000: exit status 7 (70.123375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-683000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.37s)
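
Note: the control flow visible in these logs is a single delayed retry: StartHost fails, minikube logs "will try again", waits five seconds, makes one more attempt, and only then exits with GUEST_PROVISION. A sketch of that pattern under the same error, with hypothetical function names (this is not minikube's actual source):

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// startHostOnce stands in for the driver start that keeps failing above.
	func startHostOnce() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// startWithRetry tolerates one failure, sleeps, retries once, and treats
	// a second failure as fatal, mirroring the sequence in the logs.
	func startWithRetry(delay time.Duration) error {
		if err := startHostOnce(); err != nil {
			log.Printf("StartHost failed, but will try again: %v", err)
			time.Sleep(delay)
			return startHostOnce()
		}
		return nil
	}

	func main() {
		if err := startWithRetry(5 * time.Second); err != nil {
			log.Fatalf("Exiting due to GUEST_PROVISION: %v", err)
		}
	}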

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.51s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18644
- KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3203640896/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.51s)
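
Note: DRV_UNSUPPORTED_OS is expected on this agent: hyperkit only exists for darwin/amd64, so on an arm64 Mac these upgrade tests can only pass by being skipped rather than run. A sketch of the kind of guard that would do that (package and test body are hypothetical, not the actual test source):

	package hyperkit_test

	import (
		"runtime"
		"testing"
	)

	func TestHyperkitDriverSkipUpgrade(t *testing.T) {
		// hyperkit is darwin/amd64 only; skip instead of failing on Apple Silicon.
		if runtime.GOOS != "darwin" || runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is only supported on darwin/amd64")
		}
		// the upgrade subtests would run here
	}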

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.21s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18644
- KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current284421062/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.21s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-683000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-683000 --no-kubernetes --driver=qemu2 : exit status 80 (5.249897542s)

-- stdout --
	* [NoKubernetes-683000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-683000
	* Restarting existing qemu2 VM for "NoKubernetes-683000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-683000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-683000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-683000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-683000 -n NoKubernetes-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-683000 -n NoKubernetes-683000: exit status 7 (72.12575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-683000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (7.59s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-683000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-683000 --driver=qemu2 : exit status 80 (7.534603916s)

-- stdout --
	* [NoKubernetes-683000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-683000
	* Restarting existing qemu2 VM for "NoKubernetes-683000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-683000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-683000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-683000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-683000 -n NoKubernetes-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-683000 -n NoKubernetes-683000: exit status 7 (50.535875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-683000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (7.59s)
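
Note: the post-mortem helpers run status --format={{.Host}} and record "exit status 7 (may be ok)": a stopped host makes the command print Stopped and exit non-zero, and the helpers tolerate that instead of failing outright. A minimal sketch of the same check, assuming only the binary path and profile name taken from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "NoKubernetes-683000")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A stopped host exits non-zero (7 in the log above); report it,
			// but do not treat it as a failure by itself.
			fmt.Printf("status exited %d (may be ok): %s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err) // the binary could not be run at all
		}
		fmt.Printf("host state: %s", out)
	}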

TestStoppedBinaryUpgrade/Upgrade (563.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4132624878 start -p stopped-upgrade-413000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4132624878 start -p stopped-upgrade-413000 --memory=2200 --vm-driver=qemu2 : (41.149194041s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4132624878 -p stopped-upgrade-413000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4132624878 -p stopped-upgrade-413000 stop: (3.112589292s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-413000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-413000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m39.635844833s)

-- stdout --
	* [stopped-upgrade-413000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-413000" primary control-plane node in "stopped-upgrade-413000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-413000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0415 05:10:50.161525   14863 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:10:50.161682   14863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:10:50.161687   14863 out.go:304] Setting ErrFile to fd 2...
	I0415 05:10:50.161690   14863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:10:50.161830   14863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:10:50.163027   14863 out.go:298] Setting JSON to false
	I0415 05:10:50.183381   14863 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7821,"bootTime":1713175229,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:10:50.183467   14863 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:10:50.188005   14863 out.go:177] * [stopped-upgrade-413000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:10:50.195998   14863 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:10:50.196096   14863 notify.go:220] Checking for updates...
	I0415 05:10:50.202873   14863 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:10:50.205952   14863 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:10:50.208949   14863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:10:50.211919   14863 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:10:50.214903   14863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:10:50.216313   14863 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:10:50.219855   14863 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0415 05:10:50.223869   14863 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:10:50.227889   14863 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:10:50.234903   14863 start.go:297] selected driver: qemu2
	I0415 05:10:50.234909   14863 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52436 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:10:50.234958   14863 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:10:50.237530   14863 cni.go:84] Creating CNI manager for ""
	I0415 05:10:50.237551   14863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:10:50.237578   14863 start.go:340] cluster config:
	{Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52436 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:10:50.237634   14863 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:10:50.244890   14863 out.go:177] * Starting "stopped-upgrade-413000" primary control-plane node in "stopped-upgrade-413000" cluster
	I0415 05:10:50.247874   14863 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0415 05:10:50.247889   14863 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0415 05:10:50.247894   14863 cache.go:56] Caching tarball of preloaded images
	I0415 05:10:50.247945   14863 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:10:50.247949   14863 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0415 05:10:50.247987   14863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/config.json ...
	I0415 05:10:50.248256   14863 start.go:360] acquireMachinesLock for stopped-upgrade-413000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:10:50.248295   14863 start.go:364] duration metric: took 33.666µs to acquireMachinesLock for "stopped-upgrade-413000"
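
The machines lock above is acquired with a 500ms retry delay and a 13-minute timeout. A minimal sketch of that acquire-with-retry pattern, assuming a simple exclusive lock file (the path is hypothetical and this is not minikube's actual lock implementation):

    // Sketch only: poll for an exclusive lock file until the timeout elapses.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		// O_CREATE|O_EXCL fails if the file already exists,
    		// giving a crude cross-process mutex.
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held")
    }
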
	I0415 05:10:50.248303   14863 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:10:50.248309   14863 fix.go:54] fixHost starting: 
	I0415 05:10:50.248413   14863 fix.go:112] recreateIfNeeded on stopped-upgrade-413000: state=Stopped err=<nil>
	W0415 05:10:50.248421   14863 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:10:50.254836   14863 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-413000" ...
	I0415 05:10:50.258959   14863 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52403-:22,hostfwd=tcp::52404-:2376,hostname=stopped-upgrade-413000 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/disk.qcow2
	I0415 05:10:50.306142   14863 main.go:141] libmachine: STDOUT: 
	I0415 05:10:50.306173   14863 main.go:141] libmachine: STDERR: 
	I0415 05:10:50.306183   14863 main.go:141] libmachine: Waiting for VM to start (ssh -p 52403 docker@127.0.0.1)...
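
The restart above is a single qemu-system-aarch64 invocation. As a rough, commented sketch of the same flag set (paths shortened, firmware and display flags omitted; the annotations are standard QEMU flag meanings, not minikube-specific behavior):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"-M", "virt,highmem=off", // ARM "virt" machine; highmem=off keeps the address space below 4 GiB, commonly needed with hvf
    		"-cpu", "host", // pass through the host CPU model
    		"-accel", "hvf", // macOS Hypervisor.framework acceleration
    		"-m", "2200", "-smp", "2", // memory (MiB) and vCPU count from the cluster config
    		"-boot", "d", "-cdrom", "boot2docker.iso", // boot from the minikube ISO
    		"-qmp", "unix:monitor,server,nowait", // QMP control socket
    		"-pidfile", "qemu.pid",
    		// User-mode networking with host port forwards for SSH (52403)
    		// and the Docker API (52404 -> guest 2376).
    		"-nic", "user,model=virtio,hostfwd=tcp::52403-:22,hostfwd=tcp::52404-:2376,hostname=stopped-upgrade-413000",
    		"-daemonize", "disk.qcow2",
    	}
    	fmt.Println(exec.Command("qemu-system-aarch64", args...).String())
    }

After daemonizing, the driver polls SSH on the forwarded port until the guest answers, which is the "Waiting for VM to start" line above.
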
	I0415 05:11:09.888607   14863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/config.json ...
	I0415 05:11:09.889136   14863 machine.go:94] provisionDockerMachine start ...
	I0415 05:11:09.889639   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:09.889904   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:09.889919   14863 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 05:11:09.957887   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0415 05:11:09.957918   14863 buildroot.go:166] provisioning hostname "stopped-upgrade-413000"
	I0415 05:11:09.958002   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:09.958188   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:09.958196   14863 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-413000 && echo "stopped-upgrade-413000" | sudo tee /etc/hostname
	I0415 05:11:10.024814   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-413000
	
	I0415 05:11:10.024867   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.024994   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.025004   14863 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-413000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-413000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-413000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 05:11:10.086216   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 05:11:10.086229   14863 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18644-12307/.minikube CaCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18644-12307/.minikube}
	I0415 05:11:10.086237   14863 buildroot.go:174] setting up certificates
	I0415 05:11:10.086242   14863 provision.go:84] configureAuth start
	I0415 05:11:10.086246   14863 provision.go:143] copyHostCerts
	I0415 05:11:10.086351   14863 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem, removing ...
	I0415 05:11:10.086359   14863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem
	I0415 05:11:10.086468   14863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/key.pem (1675 bytes)
	I0415 05:11:10.086658   14863 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem, removing ...
	I0415 05:11:10.086663   14863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem
	I0415 05:11:10.086719   14863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.pem (1078 bytes)
	I0415 05:11:10.086826   14863 exec_runner.go:144] found /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem, removing ...
	I0415 05:11:10.086831   14863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem
	I0415 05:11:10.086887   14863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18644-12307/.minikube/cert.pem (1123 bytes)
	I0415 05:11:10.086983   14863 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-413000 san=[127.0.0.1 localhost minikube stopped-upgrade-413000]
	I0415 05:11:10.244448   14863 provision.go:177] copyRemoteCerts
	I0415 05:11:10.244504   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 05:11:10.244514   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:11:10.277731   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0415 05:11:10.285831   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 05:11:10.294106   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0415 05:11:10.302132   14863 provision.go:87] duration metric: took 215.884458ms to configureAuth
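
configureAuth above issues a Docker server certificate signed by the minikube CA, with the logged san=[...] names in the certificate's SAN extension, then copies it to /etc/docker. A minimal standard-library sketch of issuing such a cert (throwaway CA and 24h lifetime are illustrative assumptions; the real flow reads the CA from certs/ca.pem and certs/ca-key.pem):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for illustration only.
    	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-413000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		// The san=[...] list above becomes IP and DNS SANs:
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-413000"},
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
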
	I0415 05:11:10.302144   14863 buildroot.go:189] setting minikube options for container-runtime
	I0415 05:11:10.302286   14863 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:11:10.302324   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.302420   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.302426   14863 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 05:11:10.364345   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0415 05:11:10.364359   14863 buildroot.go:70] root file system type: tmpfs
	I0415 05:11:10.364415   14863 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 05:11:10.364488   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.364617   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.364652   14863 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 05:11:10.427080   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 05:11:10.427141   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.427260   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.427268   14863 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 05:11:10.766343   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0415 05:11:10.766359   14863 machine.go:97] duration metric: took 877.249833ms to provisionDockerMachine
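
The diff-or-swap command a few lines up makes the unit update idempotent: the rendered config goes to docker.service.new, and the live unit is only replaced (with a daemon-reload and restart) when the content actually differs. A local Go sketch of the same write-then-swap idiom (the path and restart hook are illustrative; the real flow runs shell over SSH):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // updateUnit replaces path with want only when the content differs,
    // then invokes the supplied restart hook.
    func updateUnit(path string, want []byte, restart func() error) error {
    	have, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(have, want) {
    		return nil // already up to date; skip daemon-reload/restart
    	}
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, want, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(tmp, path); err != nil {
    		return err
    	}
    	return restart()
    }

    func main() {
    	err := updateUnit("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"),
    		func() error {
    			fmt.Println("systemctl daemon-reload && systemctl restart docker")
    			return nil
    		})
    	if err != nil {
    		panic(err)
    	}
    }
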
	I0415 05:11:10.766367   14863 start.go:293] postStartSetup for "stopped-upgrade-413000" (driver="qemu2")
	I0415 05:11:10.766374   14863 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 05:11:10.766435   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 05:11:10.766445   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:11:10.797630   14863 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 05:11:10.799058   14863 info.go:137] Remote host: Buildroot 2021.02.12
	I0415 05:11:10.799066   14863 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18644-12307/.minikube/addons for local assets ...
	I0415 05:11:10.799158   14863 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18644-12307/.minikube/files for local assets ...
	I0415 05:11:10.799281   14863 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem -> 127152.pem in /etc/ssl/certs
	I0415 05:11:10.799408   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 05:11:10.802201   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem --> /etc/ssl/certs/127152.pem (1708 bytes)
	I0415 05:11:10.809385   14863 start.go:296] duration metric: took 43.015209ms for postStartSetup
	I0415 05:11:10.809399   14863 fix.go:56] duration metric: took 20.561946916s for fixHost
	I0415 05:11:10.809439   14863 main.go:141] libmachine: Using SSH client type: native
	I0415 05:11:10.809537   14863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10299dc80] 0x1029a04e0 <nil>  [] 0s} localhost 52403 <nil> <nil>}
	I0415 05:11:10.809542   14863 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0415 05:11:10.867125   14863 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713183070.489905420
	
	I0415 05:11:10.867133   14863 fix.go:216] guest clock: 1713183070.489905420
	I0415 05:11:10.867138   14863 fix.go:229] Guest: 2024-04-15 05:11:10.48990542 -0700 PDT Remote: 2024-04-15 05:11:10.809401 -0700 PDT m=+20.683534668 (delta=-319.49558ms)
	I0415 05:11:10.867150   14863 fix.go:200] guest clock delta is within tolerance: -319.49558ms
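
The guest clock check above runs date +%s.%N in the VM and compares the parsed timestamp against the host clock. A small sketch of the parse-and-compare step (the 2s tolerance is an assumption; the log only shows that a -319ms delta was accepted):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestTime parses `date +%s.%N` output; assumes the full 9-digit %N field.
    func guestTime(out string) (time.Time, error) {
    	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
    	s, err := strconv.ParseInt(sec, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	n, err := strconv.ParseInt(frac, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(s, n), nil
    }

    func main() {
    	g, err := guestTime("1713183070.489905420\n")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Until(g) // negative if the guest clock is behind the host
    	fmt.Printf("guest=%s delta=%s within=%v\n", g, delta, delta.Abs() < 2*time.Second)
    }
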
	I0415 05:11:10.867152   14863 start.go:83] releasing machines lock for "stopped-upgrade-413000", held for 20.619710708s
	I0415 05:11:10.867222   14863 ssh_runner.go:195] Run: cat /version.json
	I0415 05:11:10.867224   14863 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 05:11:10.867230   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:11:10.867241   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	W0415 05:11:10.867899   14863 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52403: connect: connection refused
	I0415 05:11:10.867923   14863 retry.go:31] will retry after 144.831337ms: dial tcp [::1]:52403: connect: connection refused
	W0415 05:11:10.899128   14863 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0415 05:11:10.899187   14863 ssh_runner.go:195] Run: systemctl --version
	I0415 05:11:10.901621   14863 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 05:11:10.903077   14863 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 05:11:10.903110   14863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0415 05:11:10.906095   14863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0415 05:11:10.910694   14863 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
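
The two find/sed invocations above normalize any bridge/podman CNI configs under /etc/cni/net.d: lines whose subnet or dst value contains an IPv6 address (a ':') are dropped, and the remaining IPv4 subnet is forced to the pod CIDR 10.244.0.0/16. A rough Go equivalent of that rewrite (illustrative only; it does not reproduce every corner case of the sed expressions):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    var (
    	// IPv6 values contain ':'; drop the whole line.
    	ipv6Line = regexp.MustCompile(`(?m)^.*"(?:subnet|dst)": ".*:.*".*\n`)
    	subnet   = regexp.MustCompile(`"subnet": "[^"]*"`)
    )

    func rewrite(conf string) string {
    	conf = ipv6Line.ReplaceAllString(conf, "")
    	return subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
    }

    func main() {
    	in := "{\n  \"dst\": \"::/0\",\n  \"subnet\": \"2001:db8::/64\",\n  \"subnet\": \"10.88.0.0/16\"\n}"
    	fmt.Println(rewrite(in))
    }
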
	I0415 05:11:10.910701   14863 start.go:494] detecting cgroup driver to use...
	I0415 05:11:10.910773   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 05:11:10.916702   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0415 05:11:10.919506   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 05:11:10.922434   14863 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 05:11:10.922466   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 05:11:10.925486   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 05:11:10.928628   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 05:11:10.931352   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 05:11:10.934288   14863 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 05:11:10.937438   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 05:11:10.940523   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 05:11:10.943170   14863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 05:11:10.946218   14863 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 05:11:10.949140   14863 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 05:11:10.951642   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:11.020550   14863 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 05:11:11.031013   14863 start.go:494] detecting cgroup driver to use...
	I0415 05:11:11.031086   14863 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 05:11:11.043238   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 05:11:11.088543   14863 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 05:11:11.095069   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 05:11:11.099582   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 05:11:11.104203   14863 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 05:11:11.149760   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 05:11:11.155348   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 05:11:11.160987   14863 ssh_runner.go:195] Run: which cri-dockerd
	I0415 05:11:11.162152   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 05:11:11.165050   14863 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 05:11:11.170024   14863 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 05:11:11.231397   14863 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 05:11:11.292022   14863 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 05:11:11.292085   14863 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 05:11:11.297429   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:11.373638   14863 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 05:11:12.483764   14863 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.110144625s)
	I0415 05:11:12.483837   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 05:11:12.488658   14863 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0415 05:11:12.494981   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 05:11:12.499955   14863 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 05:11:12.576851   14863 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 05:11:12.652972   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:12.728330   14863 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 05:11:12.734244   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 05:11:12.739206   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:12.799167   14863 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 05:11:12.839855   14863 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 05:11:12.841029   14863 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 05:11:12.842926   14863 start.go:562] Will wait 60s for crictl version
	I0415 05:11:12.842959   14863 ssh_runner.go:195] Run: which crictl
	I0415 05:11:12.844213   14863 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 05:11:12.859065   14863 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0415 05:11:12.859151   14863 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 05:11:12.875657   14863 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 05:11:12.895995   14863 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0415 05:11:12.896068   14863 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0415 05:11:12.897290   14863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 05:11:12.900828   14863 kubeadm.go:877] updating cluster {Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52436 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0415 05:11:12.900870   14863 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0415 05:11:12.900942   14863 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 05:11:12.911580   14863 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 05:11:12.911588   14863 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0415 05:11:12.911639   14863 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 05:11:12.914559   14863 ssh_runner.go:195] Run: which lz4
	I0415 05:11:12.915924   14863 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 05:11:12.917108   14863 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 05:11:12.917130   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0415 05:11:13.613022   14863 docker.go:649] duration metric: took 697.160417ms to copy over tarball
	I0415 05:11:13.613093   14863 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 05:11:14.869890   14863 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.256833583s)
	I0415 05:11:14.869903   14863 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 05:11:14.885862   14863 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 05:11:14.889653   14863 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0415 05:11:14.894663   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:14.971969   14863 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 05:11:16.716827   14863 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.744911667s)
	I0415 05:11:16.716924   14863 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 05:11:16.730940   14863 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 05:11:16.730948   14863 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0415 05:11:16.730954   14863 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0415 05:11:16.737231   14863 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:16.737316   14863 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:16.737373   14863 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:16.737413   14863 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:16.737485   14863 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0415 05:11:16.737642   14863 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:16.737740   14863 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:16.737791   14863 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:16.747545   14863 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:16.747601   14863 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0415 05:11:16.747745   14863 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:16.748225   14863 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:16.748455   14863 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:16.748480   14863 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:16.748468   14863 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:16.748507   14863 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:17.155198   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0415 05:11:17.165722   14863 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0415 05:11:17.165747   14863 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0415 05:11:17.165803   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0415 05:11:17.175875   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0415 05:11:17.175980   14863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0415 05:11:17.177568   14863 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0415 05:11:17.177577   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0415 05:11:17.185152   14863 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0415 05:11:17.185164   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0415 05:11:17.196795   14863 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0415 05:11:17.196933   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:17.200146   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:17.220978   14863 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0415 05:11:17.221147   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:17.221167   14863 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0415 05:11:17.221181   14863 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:17.221207   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0415 05:11:17.221839   14863 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0415 05:11:17.221858   14863 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:17.221883   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0415 05:11:17.233772   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:17.252191   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0415 05:11:17.252315   14863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0415 05:11:17.252317   14863 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0415 05:11:17.252333   14863 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:17.252370   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0415 05:11:17.257605   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0415 05:11:17.269461   14863 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0415 05:11:17.269480   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0415 05:11:17.269482   14863 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:17.269461   14863 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0415 05:11:17.269533   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0415 05:11:17.269543   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0415 05:11:17.287953   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0415 05:11:17.288086   14863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0415 05:11:17.290212   14863 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0415 05:11:17.290247   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0415 05:11:17.292080   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:17.292696   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:17.331459   14863 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0415 05:11:17.331504   14863 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:17.331620   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0415 05:11:17.339372   14863 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0415 05:11:17.339397   14863 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:17.339463   14863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0415 05:11:17.357542   14863 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0415 05:11:17.357568   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0415 05:11:17.382839   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0415 05:11:17.388095   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0415 05:11:17.484126   14863 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0415 05:11:17.554179   14863 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0415 05:11:17.554193   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0415 05:11:17.569652   14863 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0415 05:11:17.569773   14863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:17.705703   14863 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0415 05:11:17.705737   14863 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0415 05:11:17.705758   14863 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:17.705831   14863 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:11:17.719819   14863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0415 05:11:17.719940   14863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0415 05:11:17.721447   14863 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0415 05:11:17.721461   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0415 05:11:17.749698   14863 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0415 05:11:17.749712   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0415 05:11:17.989166   14863 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0415 05:11:17.989205   14863 cache_images.go:92] duration metric: took 1.258292833s to LoadCachedImages
	W0415 05:11:17.989252   14863 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
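
Each image in the LoadCachedImages flow above follows the same per-image pattern: inspect the image ID in the runtime, and when it does not match the expected digest, remove the stale tag, copy the cached tarball into the VM, and pipe it into docker load. A condensed sketch of that loop (runSSH is a hypothetical stand-in for the ssh_runner calls; here it only echoes the commands):

    package main

    import "fmt"

    func runSSH(cmd string) { fmt.Println("ssh>", cmd) }

    func loadCachedImage(image, wantID, cachePath, vmPath string) {
    	// 1. Does the runtime already have the expected image ID?
    	runSSH("docker image inspect --format {{.Id}} " + image)
    	// (compare the output against wantID; if equal, nothing to do)

    	// 2. Remove any stale tag so the load cannot be shadowed.
    	runSSH("docker rmi " + image)

    	// 3. Copy the cached tarball into the VM (scp in the real flow),
    	//    then stream it into the daemon.
    	runSSH(fmt.Sprintf("scp %s -> %s", cachePath, vmPath))
    	runSSH(fmt.Sprintf("sudo cat %s | docker load", vmPath))
    }

    func main() {
    	loadCachedImage(
    		"registry.k8s.io/pause:3.7",
    		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		".minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
    		"/var/lib/minikube/images/pause_3.7",
    	)
    }

The failure logged above is then simply the first step of this loop aborting: the cached tarball for kube-scheduler_v1.24.1 is missing on the host, so the whole cache load is reported as failed.
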
	I0415 05:11:17.989257   14863 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0415 05:11:17.989307   14863 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-413000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 05:11:17.989369   14863 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 05:11:18.003219   14863 cni.go:84] Creating CNI manager for ""
	I0415 05:11:18.003231   14863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:11:18.003240   14863 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 05:11:18.003248   14863 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-413000 NodeName:stopped-upgrade-413000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 05:11:18.003318   14863 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-413000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 05:11:18.003380   14863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0415 05:11:18.006331   14863 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 05:11:18.006367   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 05:11:18.009373   14863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0415 05:11:18.014360   14863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 05:11:18.019350   14863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0415 05:11:18.024803   14863 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0415 05:11:18.026123   14863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
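
The one-liner above makes the /etc/hosts entry idempotent: filter out any existing line ending in the name, then append the desired ip<TAB>name pair and copy the result back. A small Go sketch of the same filter-and-append step (illustrative only):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // setHostsEntry drops any line already ending in "\t"+name,
    // then appends ip+"\t"+name.
    func setHostsEntry(hosts, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			out = append(out, line)
    		}
    	}
    	out = append(out, ip+"\t"+name)
    	return strings.Join(out, "\n") + "\n"
    }

    func main() {
    	fmt.Print(setHostsEntry("127.0.0.1\tlocalhost\n", "10.0.2.15", "control-plane.minikube.internal"))
    }
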
	I0415 05:11:18.030098   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:11:18.113806   14863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 05:11:18.119634   14863 certs.go:68] Setting up /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000 for IP: 10.0.2.15
	I0415 05:11:18.119645   14863 certs.go:194] generating shared ca certs ...
	I0415 05:11:18.119671   14863 certs.go:226] acquiring lock for ca certs: {Name:mka361b510efa9f3b096790ec7d465ec93fc1994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:11:18.120520   14863 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.key
	I0415 05:11:18.120678   14863 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.key
	I0415 05:11:18.120685   14863 certs.go:256] generating profile certs ...
	I0415 05:11:18.120912   14863 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.key
	I0415 05:11:18.120928   14863 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657
	I0415 05:11:18.120941   14863 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0415 05:11:18.158684   14863 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 ...
	I0415 05:11:18.158698   14863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657: {Name:mk1cb4d2ae939b15cf618866d7fdcae7adbe5314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:11:18.159622   14863 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657 ...
	I0415 05:11:18.159629   14863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657: {Name:mkeff21d24c65d3e56ffa3c461d20fd8a7672579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:11:18.159767   14863 certs.go:381] copying /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt
	I0415 05:11:18.159903   14863 certs.go:385] copying /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key
	I0415 05:11:18.160139   14863 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/proxy-client.key
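
The apiserver cert above is issued for four IP SANs — the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP (10.0.2.15) — so clients can reach the apiserver at any of those addresses. The sketch below mints a cert with the same SANs using only the standard library; it self-signs for brevity, whereas the real flow signs with the minikubeCA key.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // SANs from the log line above
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }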
	I0415 05:11:18.160292   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715.pem (1338 bytes)
	W0415 05:11:18.160458   14863 certs.go:480] ignoring /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715_empty.pem, impossibly tiny 0 bytes
	I0415 05:11:18.160467   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca-key.pem (1675 bytes)
	I0415 05:11:18.160491   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem (1078 bytes)
	I0415 05:11:18.160511   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem (1123 bytes)
	I0415 05:11:18.160535   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/key.pem (1675 bytes)
	I0415 05:11:18.160581   14863 certs.go:484] found cert: /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem (1708 bytes)
	I0415 05:11:18.160917   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 05:11:18.169308   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 05:11:18.176629   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 05:11:18.183642   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 05:11:18.190168   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0415 05:11:18.197133   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0415 05:11:18.203567   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 05:11:18.210108   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 05:11:18.216879   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/ssl/certs/127152.pem --> /usr/share/ca-certificates/127152.pem (1708 bytes)
	I0415 05:11:18.223018   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 05:11:18.230028   14863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/12715.pem --> /usr/share/ca-certificates/12715.pem (1338 bytes)
	I0415 05:11:18.237029   14863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 05:11:18.242399   14863 ssh_runner.go:195] Run: openssl version
	I0415 05:11:18.244388   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/127152.pem && ln -fs /usr/share/ca-certificates/127152.pem /etc/ssl/certs/127152.pem"
	I0415 05:11:18.247059   14863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127152.pem
	I0415 05:11:18.248393   14863 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 11:54 /usr/share/ca-certificates/127152.pem
	I0415 05:11:18.248413   14863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127152.pem
	I0415 05:11:18.250056   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/127152.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 05:11:18.253392   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 05:11:18.256435   14863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:11:18.257778   14863 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 12:07 /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:11:18.257795   14863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 05:11:18.259583   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 05:11:18.262432   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12715.pem && ln -fs /usr/share/ca-certificates/12715.pem /etc/ssl/certs/12715.pem"
	I0415 05:11:18.265636   14863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12715.pem
	I0415 05:11:18.267042   14863 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 11:54 /usr/share/ca-certificates/12715.pem
	I0415 05:11:18.267070   14863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12715.pem
	I0415 05:11:18.268765   14863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12715.pem /etc/ssl/certs/51391683.0"
	I0415 05:11:18.271470   14863 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 05:11:18.272835   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0415 05:11:18.274849   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0415 05:11:18.276704   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0415 05:11:18.278682   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0415 05:11:18.280408   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0415 05:11:18.282034   14863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
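
Each of the `openssl x509 -checkend 86400` runs above exits nonzero if the cert expires within the next 24 hours, which is how the bootstrapper decides whether control-plane certs need regeneration. A minimal Go equivalent of one check, under the assumption that the file holds a single PEM certificate:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // Sketch equivalent of `openssl x509 -noout -in FILE -checkend 86400`:
    // exit 1 when the cert expires within the next 24 hours.
    func main() {
    	data, err := os.ReadFile(os.Args[1])
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("Certificate will expire")
    		os.Exit(1)
    	}
    	fmt.Println("Certificate will not expire")
    }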
I0415 05:11:18.283957   14863 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52436 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0415 05:11:18.284034   14863 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 05:11:18.294730   14863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0415 05:11:18.297983   14863 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0415 05:11:18.297990   14863 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0415 05:11:18.297993   14863 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0415 05:11:18.298026   14863 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0415 05:11:18.301292   14863 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0415 05:11:18.301705   14863 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-413000" does not appear in /Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:11:18.301804   14863 kubeconfig.go:62] /Users/jenkins/minikube-integration/18644-12307/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-413000" cluster setting kubeconfig missing "stopped-upgrade-413000" context setting]
	I0415 05:11:18.302007   14863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/kubeconfig: {Name:mk14f6faef80a57910c6a54d13a440297d06327c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0415 05:11:18.302449   14863 kapi.go:59] client config for stopped-upgrade-413000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.key", CAFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c93c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 05:11:18.302887   14863 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0415 05:11:18.305965   14863 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-413000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
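
Drift detection here is just `diff -u` over the old and new kubeadm.yaml: exit status 1 means the files differ (in this run the criSocket gained the unix:// scheme and cgroupDriver changed from systemd to cgroupfs), which triggers a reconfigure. A hedged Go sketch of that decision, relying on diff's documented exit codes (0 identical, 1 different, >1 error):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Sketch: treat `diff -u old new` exit code 1 as "config drifted".
    func drifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	changed, patch, err := drifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if changed {
    		fmt.Println("will reconfigure:\n" + patch)
    	}
    }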
	I0415 05:11:18.305971   14863 kubeadm.go:1154] stopping kube-system containers ...
	I0415 05:11:18.306022   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 05:11:18.316494   14863 docker.go:483] Stopping containers: [0b788794f1b5 c93d5a242d94 f38c3f6989c2 e499c934f641 27f751bbc55d b26e657bb402 e6cc7832ba20 90ef6e31f982]
	I0415 05:11:18.316570   14863 ssh_runner.go:195] Run: docker stop 0b788794f1b5 c93d5a242d94 f38c3f6989c2 e499c934f641 27f751bbc55d b26e657bb402 e6cc7832ba20 90ef6e31f982
	I0415 05:11:18.327020   14863 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0415 05:11:18.332856   14863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 05:11:18.335646   14863 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 05:11:18.335656   14863 kubeadm.go:156] found existing configuration files:
	
	I0415 05:11:18.335686   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/admin.conf
	I0415 05:11:18.338156   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 05:11:18.338186   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 05:11:18.341225   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/kubelet.conf
	I0415 05:11:18.343953   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 05:11:18.343973   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 05:11:18.346360   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/controller-manager.conf
	I0415 05:11:18.349411   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 05:11:18.349443   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 05:11:18.352607   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/scheduler.conf
	I0415 05:11:18.355135   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 05:11:18.355165   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
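
In the cleanup loop above, none of the four /etc/kubernetes/*.conf files exist, so every grep exits with status 2 and each file is removed anyway (rm -f is a no-op on missing files) before kubeadm regenerates them. A minimal Go sketch of the same idea — drop any kubeconfig that does not pin the expected control-plane endpoint:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:52436"
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // already points at the right endpoint
    		}
    		os.Remove(path) // like rm -f: ignore "not found"
    		fmt.Println("removed stale", path)
    	}
    }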
	I0415 05:11:18.358158   14863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 05:11:18.361984   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:11:18.385456   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:11:18.886509   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:11:19.011053   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0415 05:11:19.037622   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
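
Rather than a full `kubeadm init`, the restart path above replays individual init phases — certs, kubeconfig, kubelet-start, control-plane, etcd — against the same config file. A sketch of that sequence (paths taken from the log; run as root rather than via sudo env, so this is illustrative, not the logged invocation verbatim):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("phase %v failed: %v\n%s", p, err, out))
    		}
    	}
    }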
	I0415 05:11:19.066771   14863 api_server.go:52] waiting for apiserver process to appear ...
	I0415 05:11:19.066860   14863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:11:19.568985   14863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:11:20.068869   14863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:11:20.072987   14863 api_server.go:72] duration metric: took 1.006260083s to wait for apiserver process to appear ...
	I0415 05:11:20.072998   14863 api_server.go:88] waiting for apiserver healthz status ...
	I0415 05:11:20.073006   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:25.073910   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:25.073934   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:30.074704   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:30.074742   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:35.074880   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:35.074959   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:40.075276   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:40.075329   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:45.075782   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:45.075812   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:50.076368   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:50.076505   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:11:55.077677   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:11:55.077749   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:00.079075   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:00.079100   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:05.080583   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:05.080652   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:10.082638   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:10.082678   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:15.084698   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:15.084720   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:20.086731   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
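
Each probe in the loop above is a single GET against /healthz that times out after 5s (the Client.Timeout in every "stopped:" line), retried until an overall deadline; only then does the bootstrapper fall back to gathering component logs. A hedged sketch of such a wait loop — InsecureSkipVerify stands in for the real code's CA-pinned TLS config:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // Sketch of the healthz wait: one 5s-timeout GET per attempt,
    // retried until an overall deadline.
    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }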
	I0415 05:12:20.086849   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:20.098618   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:20.098714   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:20.109744   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:20.109843   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:20.120258   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:20.120340   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:20.130834   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:20.130918   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:20.140931   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:20.141013   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:20.151445   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:20.151519   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:20.161695   14863 logs.go:276] 0 containers: []
	W0415 05:12:20.162400   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:20.162472   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:20.172210   14863 logs.go:276] 0 containers: []
	W0415 05:12:20.172222   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:20.172230   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:20.172234   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:20.183813   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:20.183823   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:20.207677   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:20.207687   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:20.219285   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:20.219294   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:20.322581   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:20.322593   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:20.336537   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:20.336548   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:20.353949   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:20.353960   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:20.390947   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:20.390957   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:20.406423   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:20.406434   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:20.422631   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:20.422641   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:20.436057   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:20.436069   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:20.453033   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:20.453044   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:20.475371   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:20.475382   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:20.479642   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:20.479649   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:20.496626   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:20.496639   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
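
The gathering pass above repeats for every failed health window: enumerate k8s_<component> containers with docker ps name filters, then tail 400 lines from each (the repeated cycles below differ only in timestamps and ordering). A compact Go sketch of that pass, using the same docker filters the log shows:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // List container IDs per component, then tail each one's logs.
    func idsFor(component string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"}
    	for _, c := range components {
    		for _, id := range idsFor(c) {
    			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s\n", c, id, out)
    		}
    	}
    }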
	I0415 05:12:23.027530   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:28.029531   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:28.029648   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:28.041144   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:28.041238   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:28.052431   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:28.052513   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:28.063055   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:28.063132   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:28.077453   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:28.077543   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:28.087891   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:28.087979   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:28.102784   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:28.102863   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:28.113158   14863 logs.go:276] 0 containers: []
	W0415 05:12:28.113172   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:28.113251   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:28.123652   14863 logs.go:276] 0 containers: []
	W0415 05:12:28.123669   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:28.123680   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:28.123686   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:28.128014   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:28.128028   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:28.166215   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:28.166225   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:28.190689   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:28.190698   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:28.227494   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:28.227503   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:28.249524   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:28.249533   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:28.282732   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:28.282743   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:28.297643   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:28.297654   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:28.313318   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:28.313328   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:28.327583   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:28.327595   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:28.343989   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:28.344000   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:28.361379   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:28.361388   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:28.374717   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:28.374727   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:28.385832   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:28.385845   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:28.404962   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:28.404972   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:30.918976   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:35.921306   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:35.921715   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:35.959757   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:35.959924   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:35.977961   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:35.978062   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:35.992247   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:35.992342   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:36.005165   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:36.005243   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:36.015791   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:36.015865   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:36.026912   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:36.026997   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:36.036679   14863 logs.go:276] 0 containers: []
	W0415 05:12:36.036690   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:36.036762   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:36.046909   14863 logs.go:276] 0 containers: []
	W0415 05:12:36.046920   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:36.046928   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:36.046933   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:36.087078   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:36.087091   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:36.101139   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:36.101151   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:36.105263   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:36.105270   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:36.116464   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:36.116475   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:36.128255   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:36.128268   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:36.148689   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:36.148702   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:36.174164   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:36.174184   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:36.212167   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:36.212178   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:36.239924   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:36.239935   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:36.254469   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:36.254483   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:36.265879   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:36.265889   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:36.283396   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:36.283405   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:36.297191   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:36.297207   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:36.312615   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:36.312626   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:38.831861   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:43.834141   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:43.834278   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:43.846270   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:43.846350   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:43.857047   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:43.857140   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:43.867483   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:43.867568   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:43.877802   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:43.877874   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:43.888052   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:43.888133   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:43.898596   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:43.898676   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:43.912096   14863 logs.go:276] 0 containers: []
	W0415 05:12:43.912107   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:43.912172   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:43.926570   14863 logs.go:276] 0 containers: []
	W0415 05:12:43.926582   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:43.926590   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:43.926596   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:43.962480   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:43.962491   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:43.976716   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:43.976728   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:44.016310   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:44.016319   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:44.020472   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:44.020478   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:44.031813   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:44.031824   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:44.044027   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:44.044036   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:44.067798   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:44.067809   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:44.079533   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:44.079544   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:44.096138   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:44.096149   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:44.116380   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:44.116391   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:44.145466   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:44.145476   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:44.159846   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:44.159860   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:44.172248   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:44.172260   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:44.191109   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:44.191121   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:46.712293   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:51.713565   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:51.713881   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:51.740810   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:51.740962   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:51.758276   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:51.758366   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:51.771934   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:51.772022   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:51.783784   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:51.783869   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:51.798675   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:51.798759   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:51.814559   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:51.814653   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:51.824711   14863 logs.go:276] 0 containers: []
	W0415 05:12:51.824725   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:51.824793   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:51.835108   14863 logs.go:276] 0 containers: []
	W0415 05:12:51.835119   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:51.835126   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:51.835132   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:51.839078   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:51.839084   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:51.863855   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:51.863867   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:51.878851   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:51.878860   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:51.901371   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:51.901380   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:51.940950   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:51.940964   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:51.957126   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:51.957139   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:51.970717   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:51.970728   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:12:51.995488   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:51.995498   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:52.033297   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:52.033312   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:52.051809   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:52.051820   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:52.066429   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:52.066439   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:52.081273   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:52.081285   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:52.092578   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:52.092591   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:52.112305   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:52.112316   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:54.626202   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:12:59.628466   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:12:59.628731   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:12:59.659897   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:12:59.660033   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:12:59.675315   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:12:59.675414   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:12:59.687666   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:12:59.687746   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:12:59.698145   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:12:59.698232   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:12:59.708981   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:12:59.709055   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:12:59.719707   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:12:59.719796   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:12:59.730144   14863 logs.go:276] 0 containers: []
	W0415 05:12:59.730155   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:12:59.730238   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:12:59.743933   14863 logs.go:276] 0 containers: []
	W0415 05:12:59.743946   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:12:59.743954   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:12:59.743960   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:12:59.748382   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:12:59.748389   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:12:59.765924   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:12:59.765934   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:12:59.790383   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:12:59.790400   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:12:59.804538   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:12:59.804555   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:12:59.816110   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:12:59.816122   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:12:59.834637   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:12:59.834647   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:12:59.846654   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:12:59.846665   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:12:59.882532   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:12:59.882543   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:12:59.896977   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:12:59.896987   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:12:59.908634   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:12:59.908645   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:12:59.920798   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:12:59.920813   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:12:59.958774   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:12:59.958785   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:12:59.976783   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:12:59.976794   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:12:59.996397   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:12:59.996409   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:02.522029   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:07.524180   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:07.524390   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:07.540574   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:07.540689   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:07.553308   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:07.553397   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:07.571756   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:07.571838   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:07.582348   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:07.582434   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:07.594405   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:07.594479   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:07.608068   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:07.608147   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:07.618220   14863 logs.go:276] 0 containers: []
	W0415 05:13:07.618231   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:07.618300   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:07.629912   14863 logs.go:276] 0 containers: []
	W0415 05:13:07.629926   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:07.629937   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:07.629947   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:07.646495   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:07.646505   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:07.680399   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:07.680411   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:07.692460   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:07.692471   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:07.713512   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:07.713521   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:07.725044   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:07.725055   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:07.763874   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:07.763885   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:07.789875   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:07.789885   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:07.801423   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:07.801435   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:07.816356   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:07.816367   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:07.840746   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:07.840760   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:07.846322   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:07.846332   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:07.860269   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:07.860280   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:07.874282   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:07.874293   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:07.886038   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:07.886048   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
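The gather phase then pulls the last 400 lines from each source: docker logs --tail 400 <id> for every container found above, journalctl for the kubelet and docker/cri-docker units, a filtered dmesg, kubectl describe nodes via the guest's kubeconfig, and a container-status listing that falls back from crictl to docker ps -a. The commands in the sketch below are quoted verbatim from the log; the Go driver around them is illustrative only — it runs them locally, whereas minikube executes each through /bin/bash -c on the guest via its ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-collection command through bash, as minikube does
// with /bin/bash -c, and prints whatever comes back.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	// Commands copied verbatim from the log above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
	// Per-container logs use: docker logs --tail 400 <container-id>
	gather("kube-apiserver [28bcee80e1fa]", "docker logs --tail 400 28bcee80e1fa")
}

This probe-enumerate-gather cycle repeats unchanged for the rest of the section, with only the timestamps and the ordering of the gather steps varying between iterations.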
	I0415 05:13:10.406689   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:15.407676   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:15.407815   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:15.426992   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:15.427096   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:15.441572   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:15.441662   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:15.454173   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:15.454256   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:15.464726   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:15.464798   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:15.478818   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:15.478895   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:15.489008   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:15.489081   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:15.499430   14863 logs.go:276] 0 containers: []
	W0415 05:13:15.499441   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:15.499501   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:15.509316   14863 logs.go:276] 0 containers: []
	W0415 05:13:15.509329   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:15.509337   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:15.509342   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:15.513586   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:15.513593   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:15.528356   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:15.528367   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:15.546274   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:15.546284   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:15.570452   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:15.570459   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:15.603436   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:15.603447   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:15.618304   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:15.618315   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:15.655355   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:15.655366   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:15.669031   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:15.669042   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:15.694518   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:15.694529   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:15.706048   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:15.706060   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:15.717799   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:15.717818   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:15.737028   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:15.737039   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:15.749568   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:15.749578   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:15.764157   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:15.764167   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:18.278397   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:23.280510   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:23.280812   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:23.307798   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:23.307947   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:23.325638   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:23.325746   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:23.340693   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:23.340783   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:23.351965   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:23.352056   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:23.362645   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:23.362725   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:23.373914   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:23.373993   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:23.384391   14863 logs.go:276] 0 containers: []
	W0415 05:13:23.384405   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:23.384472   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:23.394396   14863 logs.go:276] 0 containers: []
	W0415 05:13:23.394412   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:23.394419   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:23.394424   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:23.406476   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:23.406487   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:23.430418   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:23.430431   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:23.442939   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:23.442954   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:23.447352   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:23.447357   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:23.480754   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:23.480766   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:23.492544   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:23.492559   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:23.504197   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:23.504207   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:23.519875   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:23.519888   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:23.537444   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:23.537453   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:23.557616   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:23.557625   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:23.586483   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:23.586494   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:23.603647   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:23.603657   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:23.640157   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:23.640207   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:23.655069   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:23.655080   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:26.176684   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:31.178912   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:31.179081   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:31.199374   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:31.199471   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:31.214437   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:31.214531   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:31.226816   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:31.226895   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:31.238492   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:31.238575   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:31.249596   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:31.249679   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:31.261282   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:31.261365   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:31.272175   14863 logs.go:276] 0 containers: []
	W0415 05:13:31.272189   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:31.272259   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:31.282965   14863 logs.go:276] 0 containers: []
	W0415 05:13:31.282977   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:31.282985   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:31.282992   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:31.318834   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:31.318846   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:31.336952   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:31.336962   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:31.349226   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:31.349239   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:31.373056   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:31.373065   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:31.397556   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:31.397567   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:31.413751   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:31.413762   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:31.432010   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:31.432022   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:31.436287   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:31.436295   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:31.448051   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:31.448063   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:31.460160   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:31.460171   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:31.471759   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:31.471770   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:31.494503   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:31.494512   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:31.531633   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:31.531643   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:31.545331   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:31.545341   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:34.060408   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:39.062442   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:39.062626   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:39.079743   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:39.079832   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:39.096478   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:39.096565   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:39.115150   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:39.115235   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:39.126331   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:39.126415   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:39.136978   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:39.137063   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:39.147219   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:39.147299   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:39.157784   14863 logs.go:276] 0 containers: []
	W0415 05:13:39.157797   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:39.157858   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:39.171783   14863 logs.go:276] 0 containers: []
	W0415 05:13:39.171794   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:39.171802   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:39.171807   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:39.185310   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:39.185320   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:39.202732   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:39.202743   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:39.226428   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:39.226440   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:39.264628   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:39.264644   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:39.290018   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:39.290031   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:39.304127   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:39.304138   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:39.322834   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:39.322844   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:39.359739   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:39.359749   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:39.373634   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:39.373644   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:39.385222   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:39.385232   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:39.400031   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:39.400042   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:39.404011   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:39.404017   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:39.418013   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:39.418024   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:39.430009   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:39.430022   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:41.944310   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:46.946531   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:46.946919   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:46.988054   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:46.988220   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:47.010650   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:47.010760   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:47.025699   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:47.025792   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:47.038821   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:47.038907   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:47.049568   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:47.049654   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:47.059661   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:47.059733   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:47.069529   14863 logs.go:276] 0 containers: []
	W0415 05:13:47.069541   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:47.069608   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:47.080270   14863 logs.go:276] 0 containers: []
	W0415 05:13:47.080281   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:47.080288   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:47.080293   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:47.084494   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:47.084500   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:47.121548   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:47.121559   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:47.147430   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:47.147442   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:47.171077   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:47.171086   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:47.182273   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:47.182285   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:47.193746   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:47.193757   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:47.215471   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:47.215481   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:47.253610   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:47.253619   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:47.267933   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:47.267943   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:47.282186   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:47.282198   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:47.293428   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:47.293440   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:47.305083   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:47.305095   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:47.320040   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:47.320049   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:47.337140   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:47.337151   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:49.856899   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:13:54.859155   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:13:54.859577   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:13:54.898568   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:13:54.898730   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:13:54.919978   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:13:54.920079   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:13:54.935651   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:13:54.935752   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:13:54.948350   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:13:54.948433   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:13:54.962187   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:13:54.962269   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:13:54.972664   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:13:54.972746   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:13:54.982593   14863 logs.go:276] 0 containers: []
	W0415 05:13:54.982605   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:13:54.982676   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:13:54.993021   14863 logs.go:276] 0 containers: []
	W0415 05:13:54.993036   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:13:54.993044   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:13:54.993050   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:13:55.032943   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:13:55.032958   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:13:55.037790   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:13:55.037797   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:13:55.064032   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:13:55.064043   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:13:55.075793   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:13:55.075804   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:13:55.094243   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:13:55.094253   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:13:55.133337   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:13:55.133349   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:13:55.147472   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:13:55.147481   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:13:55.165799   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:13:55.165809   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:13:55.187511   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:13:55.187522   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:13:55.199107   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:13:55.199117   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:13:55.214031   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:13:55.214042   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:13:55.231801   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:13:55.231812   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:13:55.243229   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:13:55.243240   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:13:55.259397   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:13:55.259406   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:13:57.785473   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:02.787654   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:02.787971   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:02.825626   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:02.825781   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:02.852147   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:02.852244   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:02.866221   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:02.866319   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:02.878314   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:02.878394   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:02.888447   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:02.888525   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:02.898909   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:02.898987   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:02.908991   14863 logs.go:276] 0 containers: []
	W0415 05:14:02.909003   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:02.909070   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:02.919880   14863 logs.go:276] 0 containers: []
	W0415 05:14:02.919893   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:02.919903   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:02.919908   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:02.944748   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:02.944762   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:02.965028   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:02.965038   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:02.977075   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:02.977088   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:02.994033   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:02.994043   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:03.008340   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:03.008372   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:03.028114   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:03.028124   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:03.039491   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:03.039503   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:03.051312   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:03.051325   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:03.070803   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:03.070814   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:03.108176   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:03.108193   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:03.144091   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:03.144102   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:03.148547   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:03.148554   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:03.163455   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:03.163465   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:03.174829   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:03.174839   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:05.699118   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:10.701226   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:10.701644   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:10.741765   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:10.741928   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:10.762486   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:10.762603   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:10.779333   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:10.779422   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:10.791602   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:10.791684   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:10.802287   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:10.802363   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:10.812966   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:10.813051   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:10.823183   14863 logs.go:276] 0 containers: []
	W0415 05:14:10.823193   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:10.823256   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:10.833546   14863 logs.go:276] 0 containers: []
	W0415 05:14:10.833562   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:10.833570   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:10.833577   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:10.871440   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:10.871452   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:10.906028   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:10.906043   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:10.917570   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:10.917582   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:10.934611   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:10.934622   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:10.948705   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:10.948720   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:10.973253   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:10.973266   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:10.996612   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:10.996622   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:11.010728   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:11.010738   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:11.023774   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:11.023783   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:11.038907   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:11.038918   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:11.058861   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:11.058870   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:11.062972   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:11.062980   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:11.077442   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:11.077453   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:11.091547   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:11.091557   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:13.605571   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:18.607547   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:18.607686   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:18.624878   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:18.624986   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:18.638655   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:18.638754   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:18.650632   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:18.650707   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:18.660922   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:18.661006   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:18.671604   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:18.671681   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:18.681772   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:18.681855   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:18.692055   14863 logs.go:276] 0 containers: []
	W0415 05:14:18.692067   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:18.692135   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:18.702300   14863 logs.go:276] 0 containers: []
	W0415 05:14:18.702311   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:18.702319   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:18.702325   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:18.706498   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:18.706505   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:18.720561   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:18.720572   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:18.732053   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:18.732064   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:18.750340   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:18.750351   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:18.761947   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:18.761959   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:18.779021   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:18.779031   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:18.817619   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:18.817628   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:18.851825   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:18.851835   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:18.870593   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:18.870605   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:18.890316   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:18.890327   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:18.913213   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:18.913232   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:18.939420   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:18.939431   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:18.951648   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:18.951658   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:18.966584   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:18.966594   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:21.480492   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:26.482692   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:26.482897   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:26.505238   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:26.505348   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:26.520943   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:26.521029   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:26.533525   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:26.533608   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:26.544900   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:26.544984   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:26.559345   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:26.559430   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:26.570166   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:26.570239   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:26.581255   14863 logs.go:276] 0 containers: []
	W0415 05:14:26.581266   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:26.581336   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:26.591790   14863 logs.go:276] 0 containers: []
	W0415 05:14:26.591808   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:26.591817   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:26.591823   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:26.596245   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:26.596252   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:26.614564   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:26.614577   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:26.625738   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:26.625750   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:26.643838   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:26.643851   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:26.664370   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:26.664381   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:26.702279   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:26.702289   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:26.720720   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:26.720731   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:26.734790   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:26.734799   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:26.759193   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:26.759204   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:26.770920   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:26.770931   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:26.786255   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:26.786265   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:26.810080   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:26.810087   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:26.843718   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:26.843728   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:26.855661   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:26.855673   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:29.374991   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:34.376986   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:34.377114   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:34.389581   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:34.389677   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:34.400631   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:34.400710   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:34.411198   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:34.411273   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:34.422131   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:34.422207   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:34.433454   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:34.433542   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:34.450964   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:34.451038   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:34.461035   14863 logs.go:276] 0 containers: []
	W0415 05:14:34.461049   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:34.461115   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:34.472170   14863 logs.go:276] 0 containers: []
	W0415 05:14:34.472181   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:34.472188   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:34.472193   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:34.509712   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:34.509722   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:34.514038   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:34.514046   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:34.526160   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:34.526171   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:34.540280   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:34.540290   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:34.552131   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:34.552142   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:34.575430   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:34.575438   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:34.608932   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:34.608943   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:34.623491   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:34.623502   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:34.639550   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:34.639560   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:34.651309   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:34.651323   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:34.668122   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:34.668132   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:34.691905   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:34.691915   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:34.703579   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:34.703590   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:34.718376   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:34.718387   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
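	After discovery, logs are gathered source by source, each source mapping to one shell command executed through `/bin/bash -c`. Note that the gathering order differs between cycles above, which is consistent with iterating a Go map (whose iteration order is randomized). The sketch below copies a few commands verbatim from the log; the fan-out wrapper is an assumption, not minikube's `logs.go`.

```go
// Fan out one shell command per log source, as in the "Gathering logs for ..."
// lines above. Map iteration order is intentionally left random, mirroring
// the shuffled order seen between cycles in the log.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	sources := map[string]string{
		"kubelet":             "sudo journalctl -u kubelet -n 400",
		"dmesg":               "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"Docker":              "sudo journalctl -u docker -u cri-docker -n 400",
		"container status":    "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"etcd [b02bdb06ba9c]": "docker logs --tail 400 b02bdb06ba9c",
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
}
```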
	I0415 05:14:37.239193   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:42.241267   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:42.241496   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:42.268303   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:42.268441   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:42.285741   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:42.285843   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:42.304848   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:42.304920   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:42.316644   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:42.316729   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:42.328261   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:42.328354   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:42.347145   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:42.347216   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:42.357772   14863 logs.go:276] 0 containers: []
	W0415 05:14:42.357786   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:42.357859   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:42.367875   14863 logs.go:276] 0 containers: []
	W0415 05:14:42.367886   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:42.367894   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:42.367918   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:42.405119   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:42.405129   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:42.419735   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:42.419744   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:42.431572   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:42.431582   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:42.442873   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:42.442884   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:42.457660   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:42.457670   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:42.483787   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:42.483805   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:42.507883   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:42.507894   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:42.531238   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:42.531248   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:42.548909   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:42.548921   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:42.569755   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:42.569765   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:42.573879   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:42.573885   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:42.608281   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:42.608294   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:42.622033   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:42.622043   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:42.632467   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:42.632478   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:45.146620   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:50.148760   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:50.149078   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:50.177130   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:50.177276   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:50.195686   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:50.195796   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:50.209820   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:50.209904   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:50.222609   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:50.222679   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:50.232857   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:50.232917   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:50.243532   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:50.243620   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:50.255807   14863 logs.go:276] 0 containers: []
	W0415 05:14:50.255820   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:50.255890   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:50.265966   14863 logs.go:276] 0 containers: []
	W0415 05:14:50.265978   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:50.265991   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:50.265996   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:50.280182   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:50.280192   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:50.297549   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:50.297560   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:50.312302   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:50.312313   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:50.349803   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:50.349815   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:50.353944   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:50.353950   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:14:50.386436   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:50.386445   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:50.400551   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:50.400561   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:50.424541   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:50.424551   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:50.444711   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:50.444720   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:50.456678   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:50.456689   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:50.480038   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:50.480046   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:50.493575   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:50.493585   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:50.504983   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:50.504994   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:50.517042   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:50.517052   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:53.034084   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:14:58.036214   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:14:58.036402   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:14:58.053337   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:14:58.053433   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:14:58.065906   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:14:58.065992   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:14:58.077163   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:14:58.077230   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:14:58.087923   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:14:58.088126   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:14:58.100035   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:14:58.100123   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:14:58.110752   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:14:58.110836   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:14:58.120285   14863 logs.go:276] 0 containers: []
	W0415 05:14:58.120299   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:14:58.120379   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:14:58.129987   14863 logs.go:276] 0 containers: []
	W0415 05:14:58.129998   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:14:58.130007   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:14:58.130012   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:14:58.142774   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:14:58.142784   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:14:58.156881   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:14:58.156895   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:14:58.169355   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:14:58.169366   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:14:58.186818   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:14:58.186828   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:14:58.206106   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:14:58.206115   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:14:58.229054   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:14:58.229063   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:14:58.233265   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:14:58.233272   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:14:58.247346   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:14:58.247357   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:14:58.266089   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:14:58.266100   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:14:58.278119   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:14:58.278130   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:14:58.297707   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:14:58.297718   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:14:58.322209   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:14:58.322219   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:14:58.337955   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:14:58.337966   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:14:58.373919   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:14:58.373927   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:00.909091   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:05.911387   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:05.911746   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:05.942008   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:15:05.942138   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:05.959872   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:15:05.959980   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:05.973791   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:15:05.973881   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:05.985577   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:15:05.985670   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:05.996451   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:15:05.996535   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:06.018016   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:15:06.018097   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:06.028183   14863 logs.go:276] 0 containers: []
	W0415 05:15:06.028194   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:06.028253   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:06.038793   14863 logs.go:276] 0 containers: []
	W0415 05:15:06.038803   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:15:06.038813   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:15:06.038819   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:15:06.052277   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:15:06.052288   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:15:06.063623   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:15:06.063636   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:15:06.076875   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:06.076885   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:06.099217   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:15:06.099229   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:15:06.128161   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:15:06.128171   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:15:06.145546   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:15:06.145559   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:06.157245   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:06.157256   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:06.161330   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:06.161336   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:06.198088   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:15:06.198098   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:15:06.212116   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:15:06.212126   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:15:06.230509   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:15:06.230520   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:15:06.250436   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:06.250447   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:15:06.288003   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:15:06.288012   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:15:06.302322   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:15:06.302331   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:15:08.821359   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:13.822500   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:13.822802   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:15:13.855669   14863 logs.go:276] 2 containers: [28bcee80e1fa f38c3f6989c2]
	I0415 05:15:13.855829   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:15:13.874592   14863 logs.go:276] 2 containers: [b02bdb06ba9c 0b788794f1b5]
	I0415 05:15:13.874706   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:15:13.892817   14863 logs.go:276] 1 containers: [6a1a696bc410]
	I0415 05:15:13.892898   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:15:13.904616   14863 logs.go:276] 2 containers: [41a00b32ae4a e499c934f641]
	I0415 05:15:13.904696   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:15:13.915245   14863 logs.go:276] 1 containers: [e82953f91271]
	I0415 05:15:13.915322   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:15:13.925469   14863 logs.go:276] 2 containers: [3cd73879aa8a c93d5a242d94]
	I0415 05:15:13.925557   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:15:13.935296   14863 logs.go:276] 0 containers: []
	W0415 05:15:13.935311   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:15:13.935380   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:15:13.946081   14863 logs.go:276] 0 containers: []
	W0415 05:15:13.946091   14863 logs.go:278] No container was found matching "storage-provisioner"
	I0415 05:15:13.946099   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:15:13.946104   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:15:13.967473   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:15:13.967481   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:15:14.010522   14863 logs.go:123] Gathering logs for etcd [0b788794f1b5] ...
	I0415 05:15:14.010534   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b788794f1b5"
	I0415 05:15:14.026606   14863 logs.go:123] Gathering logs for kube-scheduler [41a00b32ae4a] ...
	I0415 05:15:14.026618   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a00b32ae4a"
	I0415 05:15:14.039075   14863 logs.go:123] Gathering logs for kube-proxy [e82953f91271] ...
	I0415 05:15:14.039088   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82953f91271"
	I0415 05:15:14.051563   14863 logs.go:123] Gathering logs for kube-apiserver [f38c3f6989c2] ...
	I0415 05:15:14.051575   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f38c3f6989c2"
	I0415 05:15:14.075417   14863 logs.go:123] Gathering logs for coredns [6a1a696bc410] ...
	I0415 05:15:14.075428   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1a696bc410"
	I0415 05:15:14.086514   14863 logs.go:123] Gathering logs for kube-controller-manager [c93d5a242d94] ...
	I0415 05:15:14.086526   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93d5a242d94"
	I0415 05:15:14.107217   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:15:14.107229   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:15:14.146069   14863 logs.go:123] Gathering logs for kube-apiserver [28bcee80e1fa] ...
	I0415 05:15:14.146080   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28bcee80e1fa"
	I0415 05:15:14.160799   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:15:14.160810   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:15:14.172493   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:15:14.172504   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:15:14.176627   14863 logs.go:123] Gathering logs for etcd [b02bdb06ba9c] ...
	I0415 05:15:14.176634   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b02bdb06ba9c"
	I0415 05:15:14.190928   14863 logs.go:123] Gathering logs for kube-scheduler [e499c934f641] ...
	I0415 05:15:14.190939   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e499c934f641"
	I0415 05:15:14.206120   14863 logs.go:123] Gathering logs for kube-controller-manager [3cd73879aa8a] ...
	I0415 05:15:14.206147   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd73879aa8a"
	I0415 05:15:16.725409   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:21.727874   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:21.727984   14863 kubeadm.go:591] duration metric: took 4m3.440115541s to restartPrimaryControlPlane
	W0415 05:15:21.728068   14863 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0415 05:15:21.728106   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0415 05:15:22.706230   14863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 05:15:22.711152   14863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 05:15:22.714223   14863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 05:15:22.716804   14863 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 05:15:22.716810   14863 kubeadm.go:156] found existing configuration files:
	
	I0415 05:15:22.716842   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/admin.conf
	I0415 05:15:22.719434   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 05:15:22.719461   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 05:15:22.722625   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/kubelet.conf
	I0415 05:15:22.725142   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 05:15:22.725165   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 05:15:22.727963   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/controller-manager.conf
	I0415 05:15:22.731288   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 05:15:22.731319   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 05:15:22.734566   14863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/scheduler.conf
	I0415 05:15:22.737202   14863 kubeadm.go:162] "https://control-plane.minikube.internal:52436" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52436 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 05:15:22.737236   14863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
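	The cleanup sequence above keeps each kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint, and removes it otherwise so that `kubeadm init` can regenerate it; here all four greps exit with status 2 simply because the files no longer exist after the reset. A minimal sketch of that keep-or-remove check (done locally here, whereas minikube runs grep/rm over SSH):

```go
// Keep a kubeconfig only if it mentions the expected endpoint; otherwise
// remove it (rm -f semantics: removal errors for missing files are ignored).
package main

import (
	"bytes"
	"fmt"
	"os"
)

func cleanStaleConfig(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanStaleConfig("https://control-plane.minikube.internal:52436", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```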
	I0415 05:15:22.739883   14863 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 05:15:22.758895   14863 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0415 05:15:22.758930   14863 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 05:15:22.806900   14863 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 05:15:22.806956   14863 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 05:15:22.807006   14863 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0415 05:15:22.855606   14863 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 05:15:22.860787   14863 out.go:204]   - Generating certificates and keys ...
	I0415 05:15:22.860855   14863 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 05:15:22.860899   14863 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 05:15:22.860935   14863 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0415 05:15:22.860966   14863 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0415 05:15:22.861003   14863 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0415 05:15:22.861036   14863 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0415 05:15:22.861073   14863 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0415 05:15:22.861105   14863 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0415 05:15:22.861152   14863 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0415 05:15:22.861194   14863 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0415 05:15:22.861213   14863 kubeadm.go:309] [certs] Using the existing "sa" key
	I0415 05:15:22.861240   14863 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 05:15:23.035702   14863 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 05:15:23.082763   14863 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 05:15:23.132643   14863 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 05:15:23.176657   14863 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 05:15:23.206560   14863 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 05:15:23.206894   14863 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 05:15:23.206966   14863 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 05:15:23.294768   14863 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 05:15:23.298990   14863 out.go:204]   - Booting up control plane ...
	I0415 05:15:23.299039   14863 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 05:15:23.299079   14863 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 05:15:23.299113   14863 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 05:15:23.299155   14863 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 05:15:23.299237   14863 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 05:15:27.299919   14863 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001250 seconds
	I0415 05:15:27.300030   14863 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 05:15:27.304410   14863 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 05:15:27.812228   14863 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 05:15:27.812341   14863 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-413000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 05:15:28.316333   14863 kubeadm.go:309] [bootstrap-token] Using token: 4kyxh7.4qpbypddm1axy9ri
	I0415 05:15:28.322249   14863 out.go:204]   - Configuring RBAC rules ...
	I0415 05:15:28.322310   14863 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 05:15:28.322354   14863 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 05:15:28.329037   14863 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 05:15:28.329999   14863 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 05:15:28.330914   14863 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 05:15:28.331754   14863 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 05:15:28.335165   14863 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 05:15:28.523420   14863 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 05:15:28.720363   14863 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 05:15:28.720688   14863 kubeadm.go:309] 
	I0415 05:15:28.720716   14863 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 05:15:28.720721   14863 kubeadm.go:309] 
	I0415 05:15:28.720755   14863 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 05:15:28.720758   14863 kubeadm.go:309] 
	I0415 05:15:28.720768   14863 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 05:15:28.720793   14863 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 05:15:28.720826   14863 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 05:15:28.720831   14863 kubeadm.go:309] 
	I0415 05:15:28.720869   14863 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 05:15:28.720873   14863 kubeadm.go:309] 
	I0415 05:15:28.720895   14863 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 05:15:28.720897   14863 kubeadm.go:309] 
	I0415 05:15:28.720924   14863 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 05:15:28.720963   14863 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 05:15:28.721007   14863 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 05:15:28.721012   14863 kubeadm.go:309] 
	I0415 05:15:28.721069   14863 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 05:15:28.721112   14863 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 05:15:28.721116   14863 kubeadm.go:309] 
	I0415 05:15:28.721154   14863 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4kyxh7.4qpbypddm1axy9ri \
	I0415 05:15:28.721215   14863 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bccfbc6802506c01fbf9bb31ef37a61b86ce5e1e10fb5fd68834dd4a5a1b8043 \
	I0415 05:15:28.721233   14863 kubeadm.go:309] 	--control-plane 
	I0415 05:15:28.721241   14863 kubeadm.go:309] 
	I0415 05:15:28.721286   14863 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 05:15:28.721290   14863 kubeadm.go:309] 
	I0415 05:15:28.721344   14863 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4kyxh7.4qpbypddm1axy9ri \
	I0415 05:15:28.721398   14863 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bccfbc6802506c01fbf9bb31ef37a61b86ce5e1e10fb5fd68834dd4a5a1b8043 
	I0415 05:15:28.721678   14863 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 05:15:28.721687   14863 cni.go:84] Creating CNI manager for ""
	I0415 05:15:28.721696   14863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:15:28.725220   14863 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 05:15:28.732222   14863 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 05:15:28.735278   14863 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
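	`scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)` writes a bridge CNI configuration into the CNI config directory. The exact 496 bytes minikube writes are not shown in the log; the sketch below writes a plausible minimal bridge conflist and is an assumption about the file's general shape, not its actual content.

```go
// Write an assumed-minimal bridge CNI conflist. Field values below are
// illustrative guesses, not the bytes minikube actually ships.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// equivalent of: sudo mkdir -p /etc/cni/net.d, then write 1-k8s.conflist
	// (needs root, just as the sudo'd commands in the log do)
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```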
	I0415 05:15:28.740205   14863 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 05:15:28.740305   14863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 05:15:28.740306   14863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-413000 minikube.k8s.io/updated_at=2024_04_15T05_15_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd91a0c5dbcf69c10661a6c45f66c039ce7b5f02 minikube.k8s.io/name=stopped-upgrade-413000 minikube.k8s.io/primary=true
	I0415 05:15:28.779985   14863 ops.go:34] apiserver oom_adj: -16
	I0415 05:15:28.788472   14863 kubeadm.go:1107] duration metric: took 48.235417ms to wait for elevateKubeSystemPrivileges
	W0415 05:15:28.788502   14863 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 05:15:28.788506   14863 kubeadm.go:393] duration metric: took 4m10.514978042s to StartCluster
	I0415 05:15:28.788517   14863 settings.go:142] acquiring lock: {Name:mkd1677925079a7213680748c8d1dd3d4ca25e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:15:28.788603   14863 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:15:28.789018   14863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/kubeconfig: {Name:mk14f6faef80a57910c6a54d13a440297d06327c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:15:28.789226   14863 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:15:28.792261   14863 out.go:177] * Verifying Kubernetes components...
	I0415 05:15:28.789233   14863 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 05:15:28.789305   14863 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:15:28.800262   14863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 05:15:28.800296   14863 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-413000"
	I0415 05:15:28.800298   14863 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-413000"
	I0415 05:15:28.800308   14863 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-413000"
	I0415 05:15:28.800311   14863 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-413000"
	W0415 05:15:28.800314   14863 addons.go:243] addon storage-provisioner should already be in state true
	I0415 05:15:28.800332   14863 host.go:66] Checking if "stopped-upgrade-413000" exists ...
	I0415 05:15:28.805230   14863 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 05:15:28.809246   14863 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 05:15:28.809252   14863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 05:15:28.809261   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0415 05:15:28.810550   14863 kapi.go:59] client config for stopped-upgrade-413000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/stopped-upgrade-413000/client.key", CAFile:"/Users/jenkins/minikube-integration/18644-12307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c93c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 05:15:28.810669   14863 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-413000"
	W0415 05:15:28.810675   14863 addons.go:243] addon default-storageclass should already be in state true
	I0415 05:15:28.810686   14863 host.go:66] Checking if "stopped-upgrade-413000" exists ...
	I0415 05:15:28.811351   14863 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 05:15:28.811356   14863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 05:15:28.811360   14863 sshutil.go:53] new ssh client: &{IP:localhost Port:52403 SSHKeyPath:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
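	In the addon-install lines above, `scp memory --> <path>` denotes streaming an in-memory manifest straight to a file on the VM over the SSH connection described in the `sshutil` lines (localhost:52403, user docker, the profile's id_rsa key). Below is a hedged sketch of that operation using golang.org/x/crypto/ssh; it is not minikube's actual sshutil implementation.

```go
// Stream in-memory bytes to a remote root-owned file: stdin piped into
// `sudo tee` so the privilege applies to the write itself.
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func scpMemory(addr, user, keyPath, dst string, payload []byte) error {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local VM, not for real hosts
	})
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(payload)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}

func main() {
	manifest := []byte("# addon manifest bytes would go here\n")
	err := scpMemory("localhost:52403", "docker",
		"/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/stopped-upgrade-413000/id_rsa",
		"/etc/kubernetes/addons/storage-provisioner.yaml", manifest)
	if err != nil {
		panic(err)
	}
}
```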
	I0415 05:15:28.872906   14863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 05:15:28.877838   14863 api_server.go:52] waiting for apiserver process to appear ...
	I0415 05:15:28.877887   14863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 05:15:28.886058   14863 api_server.go:72] duration metric: took 96.824167ms to wait for apiserver process to appear ...
	I0415 05:15:28.886070   14863 api_server.go:88] waiting for apiserver healthz status ...
	I0415 05:15:28.886078   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:28.886281   14863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 05:15:28.889184   14863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 05:15:33.888050   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:33.888135   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:38.888609   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:38.888633   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:43.888864   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:43.888886   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:48.889239   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:48.889264   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:53.889764   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:53.889787   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:15:58.890687   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:15:58.890709   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0415 05:15:59.287240   14863 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0415 05:15:59.294515   14863 out.go:177] * Enabled addons: storage-provisioner
	I0415 05:15:59.304433   14863 addons.go:505] duration metric: took 30.516468s for enable addons: enabled=[storage-provisioner]
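	The default-storageclass addon fails above because it must first list StorageClasses through the (unreachable) apiserver before it can mark one as default. The sketch below shows that listing call with client-go, under the assumption that a class is marked default via the standard `storageclass.kubernetes.io/is-default-class` annotation; it is not minikube's addon code.

```go
// List StorageClasses against the cluster kubeconfig; with the apiserver
// down this fails the same way as the log: "dial tcp ...:8443: i/o timeout".
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	classes, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	for _, sc := range classes.Items {
		// the default class carries this well-known annotation set to "true"
		def := sc.Annotations["storageclass.kubernetes.io/is-default-class"]
		fmt.Printf("%s default=%q\n", sc.Name, def)
	}
}
```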
	I0415 05:16:03.891650   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:03.891675   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:08.892913   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:08.892942   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:13.894612   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:13.894658   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:18.896808   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:18.896833   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:23.898815   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:23.898843   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:28.900265   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:28.900374   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:16:28.911332   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:16:28.911427   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:16:28.937482   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:16:28.937575   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:16:28.952098   14863 logs.go:276] 2 containers: [7a43e07ab8d8 f75f7cb569e4]
	I0415 05:16:28.952188   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:16:28.973030   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:16:28.973113   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:16:28.983489   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:16:28.983573   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:16:28.994476   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:16:28.994556   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:16:29.005095   14863 logs.go:276] 0 containers: []
	W0415 05:16:29.005107   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:16:29.005175   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:16:29.015673   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:16:29.015689   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:16:29.015694   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:16:29.030420   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:16:29.030431   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:16:29.065118   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:16:29.065128   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:16:29.100051   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:16:29.100065   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:16:29.114798   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:16:29.114809   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:16:29.127288   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:16:29.127299   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:16:29.144663   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:16:29.144674   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:16:29.165806   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:16:29.165817   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:16:29.188662   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:16:29.188670   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:16:29.199544   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:16:29.199556   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:16:29.203754   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:16:29.203762   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:16:29.220203   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:16:29.220214   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:16:29.231669   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:16:29.231680   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
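
The repeated "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" pairs in this log come from a probe that gives each GET roughly five seconds before giving up, which matches the gap between the paired timestamps. A minimal sketch of such a probe (assuming a 5-second client timeout and InsecureSkipVerify for the VM's self-signed apiserver certificate; checkHealthz is a hypothetical helper, not minikube's actual code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against the apiserver /healthz endpoint,
    // giving up after five seconds, the same gap seen between the paired
    // "Checking apiserver healthz" and "stopped:" lines in this log.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the in-VM apiserver certificate is self-signed
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }

Every probe in this run returns the same timeout error, so control keeps falling through to the log-gathering pass that follows each failure.
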
	I0415 05:16:31.745072   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:36.746308   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:36.746499   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:16:36.763758   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:16:36.763841   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:16:36.777148   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:16:36.777226   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:16:36.787663   14863 logs.go:276] 2 containers: [7a43e07ab8d8 f75f7cb569e4]
	I0415 05:16:36.787741   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:16:36.798352   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:16:36.798432   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:16:36.809768   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:16:36.809853   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:16:36.820323   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:16:36.820399   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:16:36.830774   14863 logs.go:276] 0 containers: []
	W0415 05:16:36.830788   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:16:36.830870   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:16:36.840845   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:16:36.840858   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:16:36.840864   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:16:36.874598   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:16:36.874609   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:16:36.909446   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:16:36.909457   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:16:36.924227   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:16:36.924237   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:16:36.937879   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:16:36.937890   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:16:36.948775   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:16:36.948784   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:16:36.971963   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:16:36.971974   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:16:36.985070   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:16:36.985081   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:16:36.989400   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:16:36.989407   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:16:37.003615   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:16:37.003626   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:16:37.015824   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:16:37.015834   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:16:37.027744   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:16:37.027755   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:16:37.042558   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:16:37.042570   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
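
Each log-gathering pass begins by enumerating the containers for every control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which produces the "N containers: [...]" lines above. A sketch of that discovery step (containerIDs is a hypothetical helper and runs docker locally rather than through the ssh_runner used in this log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists the IDs of all containers, running or exited, whose
    // name matches k8s_<component>, mirroring the docker ps filter above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
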
	I0415 05:16:39.563871   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:44.564793   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:44.565149   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:16:44.607924   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:16:44.608081   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:16:44.626814   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:16:44.626911   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:16:44.640719   14863 logs.go:276] 2 containers: [7a43e07ab8d8 f75f7cb569e4]
	I0415 05:16:44.640814   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:16:44.652017   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:16:44.652091   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:16:44.662290   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:16:44.662365   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:16:44.672897   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:16:44.672975   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:16:44.682974   14863 logs.go:276] 0 containers: []
	W0415 05:16:44.682987   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:16:44.683058   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:16:44.694146   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:16:44.694168   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:16:44.694174   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:16:44.713457   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:16:44.713467   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:16:44.731042   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:16:44.731052   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:16:44.744550   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:16:44.744561   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:16:44.778642   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:16:44.778653   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:16:44.820593   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:16:44.820603   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:16:44.835297   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:16:44.835308   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:16:44.847209   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:16:44.847219   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:16:44.858988   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:16:44.858999   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:16:44.863441   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:16:44.863447   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:16:44.878510   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:16:44.878521   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:16:44.895204   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:16:44.895215   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:16:44.913580   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:16:44.913590   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
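
With the IDs in hand, the pass tails each component with docker logs --tail 400 <id> and pulls host-side units with journalctl, as in the kubelet and Docker lines above. A sketch of both calls (tailContainer and tailUnit are hypothetical helpers, executed locally for illustration):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainer mirrors the `docker logs --tail 400 <id>` calls above;
    // CombinedOutput captures the container's stdout and stderr streams.
    func tailContainer(id string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+id).CombinedOutput()
        return string(out), err
    }

    // tailUnit mirrors the journalctl calls for kubelet and docker/cri-docker.
    func tailUnit(units ...string) (string, error) {
        args := []string{"journalctl"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        args = append(args, "-n", "400")
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        if logs, err := tailContainer("1248c859ecde"); err == nil { // kube-apiserver ID from this log
            fmt.Print(logs)
        }
        if logs, err := tailUnit("docker", "cri-docker"); err == nil {
            fmt.Print(logs)
        }
    }
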
	I0415 05:16:47.439522   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:16:52.442135   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:16:52.442452   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:16:52.479572   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:16:52.479702   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:16:52.501406   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:16:52.501513   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:16:52.515681   14863 logs.go:276] 2 containers: [7a43e07ab8d8 f75f7cb569e4]
	I0415 05:16:52.515774   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:16:52.528175   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:16:52.528258   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:16:52.538791   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:16:52.538868   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:16:52.552991   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:16:52.553064   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:16:52.569081   14863 logs.go:276] 0 containers: []
	W0415 05:16:52.569094   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:16:52.569154   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:16:52.579407   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:16:52.579420   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:16:52.579426   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:16:52.583675   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:16:52.583683   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:16:52.618318   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:16:52.618330   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:16:52.633341   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:16:52.633352   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:16:52.647547   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:16:52.647557   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:16:52.658829   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:16:52.658838   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:16:52.670284   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:16:52.670296   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:16:52.681912   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:16:52.681921   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:16:52.693269   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:16:52.693282   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:16:52.704317   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:16:52.704327   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:16:52.737337   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:16:52.737344   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:16:52.752285   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:16:52.752296   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:16:52.770019   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:16:52.770028   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
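
The "container status" step is the shell one-liner sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: if crictl is on PATH it is used; if not, "echo crictl" substitutes the bare word, that sudo invocation fails, and the || falls through to sudo docker ps -a. A sketch of the same preference order in Go (containerStatus is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus reproduces the fallback order of the one-liner above:
    // prefer crictl when present, otherwise fall back to docker.
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(out)
    }
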
	I0415 05:16:55.296657   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:17:00.298787   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:17:00.298962   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:17:00.311629   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:17:00.311712   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:17:00.321497   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:17:00.321563   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:17:00.331899   14863 logs.go:276] 2 containers: [7a43e07ab8d8 f75f7cb569e4]
	I0415 05:17:00.331978   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:17:00.342296   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:17:00.342365   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:17:00.352500   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:17:00.352577   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:17:00.362806   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:17:00.362875   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:17:00.372815   14863 logs.go:276] 0 containers: []
	W0415 05:17:00.372826   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:17:00.372885   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:17:00.382680   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:17:00.382697   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:17:00.382703   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:17:00.400816   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:17:00.400825   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:17:00.424950   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:17:00.424962   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:17:00.436478   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:17:00.436488   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:17:00.471703   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:17:00.471713   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:17:00.475925   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:17:00.475931   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:17:00.493486   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:17:00.493496   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:17:00.504930   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:17:00.504941   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:17:00.519924   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:17:00.519935   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:17:00.531373   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:17:00.531383   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:17:00.543251   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:17:00.543261   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:17:00.577528   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:17:00.577537   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:17:00.591228   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:17:00.591237   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
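
The "describe nodes" step is gathered with the kubectl binary minikube pins inside the VM, pointed at the in-VM kubeconfig. A sketch of that single call (assuming the binary and kubeconfig paths shown in the log; run locally here rather than over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same binary and kubeconfig paths as the "describe nodes" lines above.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }
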
	I0415 05:17:03.111086   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:17:08.113218   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:17:08.113634   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:17:08.151451   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:17:08.151605   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:17:08.172038   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:17:08.172168   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:17:08.188849   14863 logs.go:276] 2 containers: [7a43e07ab8d8 f75f7cb569e4]
	I0415 05:17:08.188924   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:17:08.200471   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:17:08.200542   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:17:08.211170   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:17:08.211249   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:17:08.221530   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:17:08.221598   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:17:08.231697   14863 logs.go:276] 0 containers: []
	W0415 05:17:08.231713   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:17:08.231776   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:17:08.241855   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:17:08.241872   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:17:08.241878   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:17:08.255223   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:17:08.255235   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:17:08.266413   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:17:08.266424   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:17:08.279486   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:17:08.279495   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:17:08.296658   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:17:08.296668   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:17:08.307824   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:17:08.307835   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:17:08.341846   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:17:08.341856   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:17:08.346567   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:17:08.346573   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:17:08.360388   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:17:08.360398   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:17:08.371829   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:17:08.371840   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:17:08.394713   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:17:08.394726   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:17:08.438445   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:17:08.438456   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:17:08.450441   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:17:08.450452   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:17:10.966418   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:17:15.968949   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:17:15.969205   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:17:15.996651   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:17:15.996779   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:17:16.014610   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:17:16.014703   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:17:16.028116   14863 logs.go:276] 2 containers: [7a43e07ab8d8 f75f7cb569e4]
	I0415 05:17:16.028190   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:17:16.043382   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:17:16.043456   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:17:16.060108   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:17:16.060189   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:17:16.070332   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:17:16.070399   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:17:16.080720   14863 logs.go:276] 0 containers: []
	W0415 05:17:16.080733   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:17:16.080797   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:17:16.096160   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:17:16.096175   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:17:16.096180   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:17:16.120429   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:17:16.120437   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:17:16.131809   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:17:16.131820   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:17:16.135953   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:17:16.135959   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:17:16.147146   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:17:16.147155   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:17:16.158473   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:17:16.158482   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:17:16.172460   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:17:16.172471   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:17:16.187442   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:17:16.187452   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:17:16.202151   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:17:16.202161   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:17:16.218898   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:17:16.218907   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:17:16.235836   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:17:16.235847   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:17:16.270645   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:17:16.270653   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:17:16.305551   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:17:16.305562   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:17:18.820359   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:17:23.822506   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:17:23.822752   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:17:23.844271   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:17:23.844396   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:17:23.859872   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:17:23.859956   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:17:23.872442   14863 logs.go:276] 2 containers: [7a43e07ab8d8 f75f7cb569e4]
	I0415 05:17:23.872514   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:17:23.883310   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:17:23.883383   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:17:23.893448   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:17:23.893522   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:17:23.903614   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:17:23.903695   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:17:23.913161   14863 logs.go:276] 0 containers: []
	W0415 05:17:23.913172   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:17:23.913239   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:17:23.923944   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:17:23.923962   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:17:23.923967   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:17:23.934975   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:17:23.934984   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:17:23.964437   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:17:23.964447   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:17:23.998122   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:17:23.998132   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:17:24.002634   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:17:24.002644   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:17:24.077815   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:17:24.077826   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:17:24.092133   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:17:24.092143   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:17:24.103589   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:17:24.103599   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:17:24.118406   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:17:24.118415   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:17:24.136615   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:17:24.136624   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:17:24.147819   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:17:24.147830   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:17:24.159592   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:17:24.159601   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:17:24.177104   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:17:24.177114   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:17:26.691099   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:17:31.691997   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:17:31.692200   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:17:31.708626   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:17:31.708714   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:17:31.721842   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:17:31.721925   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:17:31.732696   14863 logs.go:276] 3 containers: [c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:17:31.732773   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:17:31.742914   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:17:31.742996   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:17:31.754318   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:17:31.754396   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:17:31.764484   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:17:31.764567   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:17:31.774607   14863 logs.go:276] 0 containers: []
	W0415 05:17:31.774624   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:17:31.774697   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:17:31.785159   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:17:31.785178   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:17:31.785184   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:17:31.801939   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:17:31.801949   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:17:31.813422   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:17:31.813432   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:17:31.837025   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:17:31.837035   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:17:31.872296   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:17:31.872309   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:17:31.886702   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:17:31.886714   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:17:31.898426   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:17:31.898437   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:17:31.922933   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:17:31.922942   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:17:31.934552   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:17:31.934561   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:17:31.951913   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:17:31.951923   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:17:31.985697   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:17:31.985707   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:17:31.997531   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:17:31.997543   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:17:32.002216   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:17:32.002222   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:17:32.013216   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:17:32.013227   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:17:34.526966   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:17:39.529224   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:17:39.529290   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:17:39.540058   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:17:39.540131   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:17:39.550636   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:17:39.550714   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:17:39.561180   14863 logs.go:276] 3 containers: [c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:17:39.561256   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:17:39.573001   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:17:39.573081   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:17:39.583251   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:17:39.583326   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:17:39.593655   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:17:39.593726   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:17:39.603492   14863 logs.go:276] 0 containers: []
	W0415 05:17:39.603509   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:17:39.603575   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:17:39.617051   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:17:39.617069   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:17:39.617074   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:17:39.631046   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:17:39.631057   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:17:39.642490   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:17:39.642499   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:17:39.656383   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:17:39.656394   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:17:39.668498   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:17:39.668510   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:17:39.693341   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:17:39.693349   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:17:39.697465   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:17:39.697473   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:17:39.715835   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:17:39.715847   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:17:39.747379   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:17:39.747394   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:17:39.784098   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:17:39.784115   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:17:39.831919   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:17:39.831931   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:17:39.849101   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:17:39.849111   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:17:39.861279   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:17:39.861290   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:17:39.876290   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:17:39.876301   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:17:42.390111   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:17:47.392307   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:17:47.392620   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:17:47.437929   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:17:47.438085   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:17:47.457366   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:17:47.457469   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:17:47.471924   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:17:47.472008   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:17:47.484519   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:17:47.484599   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:17:47.494772   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:17:47.494838   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:17:47.505101   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:17:47.505169   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:17:47.515215   14863 logs.go:276] 0 containers: []
	W0415 05:17:47.515228   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:17:47.515295   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:17:47.525423   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:17:47.525440   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:17:47.525445   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:17:47.559555   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:17:47.559564   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:17:47.570682   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:17:47.570691   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:17:47.585326   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:17:47.585335   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:17:47.597117   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:17:47.597130   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:17:47.608953   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:17:47.608962   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:17:47.620512   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:17:47.620523   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:17:47.631845   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:17:47.631856   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:17:47.636032   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:17:47.636038   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:17:47.649801   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:17:47.649811   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:17:47.664728   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:17:47.664739   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:17:47.675383   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:17:47.675399   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:17:47.710435   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:17:47.710446   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:17:47.722437   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:17:47.722449   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:17:47.742183   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:17:47.742192   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:17:50.268875   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:17:55.271009   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:17:55.271423   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:17:55.309341   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:17:55.309486   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:17:55.331377   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:17:55.331523   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:17:55.346754   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:17:55.346840   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:17:55.359493   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:17:55.359579   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:17:55.369926   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:17:55.369990   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:17:55.381200   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:17:55.381265   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:17:55.392381   14863 logs.go:276] 0 containers: []
	W0415 05:17:55.392393   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:17:55.392462   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:17:55.408642   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:17:55.408658   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:17:55.408664   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:17:55.442830   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:17:55.442847   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:17:55.446816   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:17:55.446822   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:17:55.461240   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:17:55.461251   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:17:55.473000   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:17:55.473018   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:17:55.510251   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:17:55.510262   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:17:55.524963   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:17:55.524973   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:17:55.536755   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:17:55.536766   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:17:55.561978   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:17:55.561985   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:17:55.576071   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:17:55.576082   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:17:55.587272   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:17:55.587283   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:17:55.598777   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:17:55.598787   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:17:55.611099   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:17:55.611110   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:17:55.623249   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:17:55.623259   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:17:55.634592   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:17:55.634602   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:17:58.153785   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:18:03.154667   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:18:03.154741   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:18:03.170826   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:18:03.170870   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:18:03.182490   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:18:03.182559   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:18:03.193529   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:18:03.193612   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:18:03.204651   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:18:03.204716   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:18:03.215819   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:18:03.215886   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:18:03.227742   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:18:03.227828   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:18:03.252687   14863 logs.go:276] 0 containers: []
	W0415 05:18:03.252701   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:18:03.252774   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:18:03.264772   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:18:03.264794   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:18:03.264800   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:18:03.277865   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:18:03.277879   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:18:03.291682   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:18:03.291693   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:18:03.310520   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:18:03.310538   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:18:03.336596   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:18:03.336614   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:18:03.374045   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:18:03.374056   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:18:03.389300   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:18:03.389309   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:18:03.404759   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:18:03.404770   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:18:03.409513   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:18:03.409520   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:18:03.444895   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:18:03.444908   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:18:03.459321   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:18:03.459332   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:18:03.473415   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:18:03.473424   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:18:03.486175   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:18:03.486184   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:18:03.498224   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:18:03.498240   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:18:03.509855   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:18:03.509868   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
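
The cycle above is the unit that repeats for the rest of this log: probe the apiserver's /healthz endpoint with a short client timeout and, when the probe fails, enumerate each control-plane container via `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and tail its logs. Below is a minimal Go sketch of that poll-and-diagnose pattern; it is an illustration only, assuming a plain net/http client and shelling out to docker the same way the log does. None of these names come from minikube's source.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// healthz probes the endpoint the log checks (https://10.0.2.15:8443/healthz).
// The ~5s gap before each "context deadline exceeded" suggests a short client
// timeout; 5 seconds here is an assumption, not minikube's actual value.
func healthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the apiserver inside the VM serves a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

// containersFor mirrors the log's container enumeration:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty output -> no containers
}

func main() {
	if err := healthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("healthz failed, gathering diagnostics:", err)
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, _ := containersFor(c)
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			// the real flow then runs: docker logs --tail 400 <id>
		}
	}
}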
	I0415 05:18:06.023393   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:18:11.025403   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:18:11.025501   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:18:11.037168   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:18:11.037247   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:18:11.047959   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:18:11.048038   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:18:11.057901   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:18:11.057981   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:18:11.068516   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:18:11.068587   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:18:11.079220   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:18:11.079291   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:18:11.089414   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:18:11.089480   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:18:11.099336   14863 logs.go:276] 0 containers: []
	W0415 05:18:11.099348   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:18:11.099426   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:18:11.109685   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:18:11.109705   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:18:11.109715   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:18:11.121318   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:18:11.121329   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:18:11.136279   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:18:11.136290   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:18:11.153230   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:18:11.153238   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:18:11.177562   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:18:11.177569   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:18:11.211600   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:18:11.211610   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:18:11.216148   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:18:11.216153   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:18:11.230260   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:18:11.230272   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:18:11.241961   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:18:11.241971   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:18:11.260134   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:18:11.260145   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:18:11.272736   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:18:11.272747   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:18:11.283977   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:18:11.283986   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:18:11.318886   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:18:11.318895   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:18:11.330683   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:18:11.330694   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:18:11.342599   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:18:11.342609   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:18:13.856141   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:18:18.858252   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:18:18.858457   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:18:18.880436   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:18:18.880554   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:18:18.898619   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:18:18.898701   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:18:18.914689   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:18:18.914768   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:18:18.924971   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:18:18.925045   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:18:18.935278   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:18:18.935341   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:18:18.945710   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:18:18.945780   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:18:18.955536   14863 logs.go:276] 0 containers: []
	W0415 05:18:18.955549   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:18:18.955614   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:18:18.969549   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:18:18.969569   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:18:18.969574   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:18:19.003036   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:18:19.003046   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:18:19.038631   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:18:19.038640   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:18:19.053500   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:18:19.053510   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:18:19.064848   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:18:19.064858   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:18:19.081382   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:18:19.081392   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:18:19.093088   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:18:19.093098   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:18:19.118130   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:18:19.118140   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:18:19.129381   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:18:19.129391   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:18:19.141301   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:18:19.141313   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:18:19.159201   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:18:19.159211   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:18:19.163889   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:18:19.163897   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:18:19.178107   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:18:19.178116   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:18:19.189400   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:18:19.189410   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:18:19.204091   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:18:19.204099   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:18:21.724807   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:18:26.727370   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:18:26.727434   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:18:26.738787   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:18:26.738867   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:18:26.753379   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:18:26.753430   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:18:26.764095   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:18:26.764173   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:18:26.778305   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:18:26.778386   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:18:26.791408   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:18:26.791464   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:18:26.803074   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:18:26.803141   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:18:26.815593   14863 logs.go:276] 0 containers: []
	W0415 05:18:26.815606   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:18:26.815669   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:18:26.827059   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:18:26.827077   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:18:26.827082   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:18:26.867264   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:18:26.867276   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:18:26.883292   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:18:26.883304   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:18:26.894992   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:18:26.895000   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:18:26.913443   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:18:26.913457   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:18:26.929945   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:18:26.929956   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:18:26.945717   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:18:26.945727   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:18:26.950056   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:18:26.950068   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:18:26.967292   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:18:26.967304   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:18:26.982514   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:18:26.982523   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:18:26.993693   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:18:26.993704   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:18:27.028411   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:18:27.028421   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:18:27.040704   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:18:27.040716   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:18:27.053250   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:18:27.053262   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:18:27.080791   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:18:27.080804   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:18:29.595217   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:18:34.597782   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:18:34.598172   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:18:34.641493   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:18:34.641627   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:18:34.659020   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:18:34.659134   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:18:34.677656   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:18:34.677741   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:18:34.688718   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:18:34.688792   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:18:34.698871   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:18:34.698942   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:18:34.709547   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:18:34.709624   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:18:34.719726   14863 logs.go:276] 0 containers: []
	W0415 05:18:34.719738   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:18:34.719804   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:18:34.730265   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:18:34.730281   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:18:34.730286   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:18:34.749085   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:18:34.749099   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:18:34.763341   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:18:34.763351   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:18:34.779080   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:18:34.779090   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:18:34.796011   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:18:34.796020   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:18:34.808462   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:18:34.808473   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:18:34.842372   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:18:34.842382   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:18:34.876684   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:18:34.876694   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:18:34.888252   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:18:34.888263   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:18:34.911965   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:18:34.911973   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:18:34.916398   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:18:34.916405   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:18:34.929246   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:18:34.929257   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:18:34.944804   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:18:34.944816   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:18:34.966583   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:18:34.966597   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:18:34.979829   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:18:34.979842   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:18:37.497426   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:18:42.499414   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:18:42.499539   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:18:42.515429   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:18:42.515521   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:18:42.528105   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:18:42.528186   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:18:42.539314   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:18:42.539390   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:18:42.550074   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:18:42.550154   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:18:42.560812   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:18:42.560892   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:18:42.571544   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:18:42.571614   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:18:42.581841   14863 logs.go:276] 0 containers: []
	W0415 05:18:42.581854   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:18:42.581917   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:18:42.602003   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:18:42.602020   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:18:42.602025   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:18:42.613983   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:18:42.613993   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:18:42.631803   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:18:42.631814   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:18:42.644321   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:18:42.644332   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:18:42.657816   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:18:42.657822   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:18:42.670045   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:18:42.670058   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:18:42.681541   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:18:42.681552   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:18:42.696455   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:18:42.696465   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:18:42.708065   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:18:42.708075   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:18:42.724982   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:18:42.724991   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:18:42.736457   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:18:42.736465   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:18:42.761272   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:18:42.761282   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:18:42.796598   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:18:42.796607   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:18:42.835966   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:18:42.835976   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:18:42.850123   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:18:42.850133   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:18:45.364520   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:18:50.367173   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:18:50.367556   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:18:50.407603   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:18:50.407733   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:18:50.429164   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:18:50.429291   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:18:50.444807   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:18:50.444904   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:18:50.457154   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:18:50.457225   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:18:50.468556   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:18:50.468635   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:18:50.479385   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:18:50.479445   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:18:50.489308   14863 logs.go:276] 0 containers: []
	W0415 05:18:50.489320   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:18:50.489386   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:18:50.499500   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:18:50.499519   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:18:50.499525   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:18:50.510793   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:18:50.510802   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:18:50.523706   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:18:50.523715   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:18:50.535204   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:18:50.535217   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:18:50.548975   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:18:50.548987   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:18:50.560463   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:18:50.560472   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:18:50.572929   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:18:50.572939   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:18:50.586169   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:18:50.586180   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:18:50.611337   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:18:50.611349   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:18:50.622830   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:18:50.622841   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:18:50.627367   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:18:50.627373   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:18:50.641641   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:18:50.641651   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:18:50.676463   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:18:50.676473   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:18:50.718673   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:18:50.718686   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:18:50.733677   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:18:50.733687   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:18:53.253183   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:18:58.254632   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:18:58.254718   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:18:58.265836   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:18:58.265901   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:18:58.277821   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:18:58.277861   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:18:58.289417   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:18:58.289492   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:18:58.300812   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:18:58.300885   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:18:58.312940   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:18:58.313018   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:18:58.327377   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:18:58.327441   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:18:58.339521   14863 logs.go:276] 0 containers: []
	W0415 05:18:58.339532   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:18:58.339591   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:18:58.350877   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:18:58.350895   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:18:58.350901   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:18:58.356330   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:18:58.356340   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:18:58.371133   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:18:58.371144   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:18:58.387823   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:18:58.387835   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:18:58.403533   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:18:58.403545   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:18:58.440751   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:18:58.440763   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:18:58.477151   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:18:58.477162   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:18:58.490680   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:18:58.490689   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:18:58.510980   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:18:58.510989   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:18:58.522735   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:18:58.522745   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:18:58.537566   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:18:58.537575   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:18:58.549993   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:18:58.550002   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:18:58.563556   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:18:58.563570   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:18:58.577220   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:18:58.577234   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:18:58.594540   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:18:58.594554   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:19:01.122416   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:19:06.124516   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:19:06.124659   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:19:06.136649   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:19:06.136715   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:19:06.147526   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:19:06.147600   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:19:06.158336   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:19:06.158409   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:19:06.169363   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:19:06.169436   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:19:06.180390   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:19:06.180459   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:19:06.191373   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:19:06.191446   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:19:06.202678   14863 logs.go:276] 0 containers: []
	W0415 05:19:06.202688   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:19:06.202752   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:19:06.213455   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:19:06.213475   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:19:06.213480   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:19:06.225587   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:19:06.225598   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:19:06.229925   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:19:06.229931   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:19:06.241854   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:19:06.241865   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:19:06.254063   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:19:06.254075   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:19:06.267811   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:19:06.267823   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:19:06.283222   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:19:06.283231   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:19:06.318692   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:19:06.318700   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:19:06.332586   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:19:06.332595   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:19:06.344298   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:19:06.344307   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:19:06.367390   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:19:06.367398   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:19:06.379143   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:19:06.379154   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:19:06.414873   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:19:06.414883   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:19:06.433879   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:19:06.433889   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:19:06.445751   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:19:06.445763   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:19:08.965105   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:19:13.967323   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:19:13.967706   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:19:13.999356   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:19:13.999506   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:19:14.018311   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:19:14.018420   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:19:14.033437   14863 logs.go:276] 4 containers: [fdf453805976 c2f804a817f4 7a43e07ab8d8 f75f7cb569e4]
	I0415 05:19:14.033514   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:19:14.045465   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:19:14.045540   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:19:14.055882   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:19:14.055963   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:19:14.069310   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:19:14.069384   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:19:14.079746   14863 logs.go:276] 0 containers: []
	W0415 05:19:14.079759   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:19:14.079824   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:19:14.093084   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:19:14.093106   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:19:14.093111   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:19:14.105028   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:19:14.105036   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:19:14.116637   14863 logs.go:123] Gathering logs for coredns [f75f7cb569e4] ...
	I0415 05:19:14.116646   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f75f7cb569e4"
	I0415 05:19:14.128250   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:19:14.128259   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:19:14.143403   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:19:14.143413   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:19:14.161830   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:19:14.161839   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:19:14.185659   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:19:14.185666   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:19:14.198098   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:19:14.198109   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:19:14.232508   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:19:14.232518   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:19:14.247103   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:19:14.247116   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:19:14.258542   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:19:14.258551   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:19:14.270210   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:19:14.270219   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:19:14.303510   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:19:14.303523   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:19:14.308096   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:19:14.308101   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:19:14.322228   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:19:14.322236   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:19:16.835860   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:19:21.836309   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:19:21.836492   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 05:19:21.851648   14863 logs.go:276] 1 containers: [1248c859ecde]
	I0415 05:19:21.851747   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 05:19:21.865031   14863 logs.go:276] 1 containers: [bd96711d1d33]
	I0415 05:19:21.865103   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 05:19:21.875430   14863 logs.go:276] 4 containers: [434f13f598fe fdf453805976 c2f804a817f4 7a43e07ab8d8]
	I0415 05:19:21.875519   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 05:19:21.886208   14863 logs.go:276] 1 containers: [af936538913c]
	I0415 05:19:21.886284   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 05:19:21.896495   14863 logs.go:276] 1 containers: [6da66369a00a]
	I0415 05:19:21.896562   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 05:19:21.906745   14863 logs.go:276] 1 containers: [8c0e5649c5d3]
	I0415 05:19:21.906805   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 05:19:21.916656   14863 logs.go:276] 0 containers: []
	W0415 05:19:21.916668   14863 logs.go:278] No container was found matching "kindnet"
	I0415 05:19:21.916737   14863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 05:19:21.927168   14863 logs.go:276] 1 containers: [584f7e06679d]
	I0415 05:19:21.927187   14863 logs.go:123] Gathering logs for coredns [434f13f598fe] ...
	I0415 05:19:21.927192   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 434f13f598fe"
	I0415 05:19:21.938471   14863 logs.go:123] Gathering logs for container status ...
	I0415 05:19:21.938481   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 05:19:21.950361   14863 logs.go:123] Gathering logs for dmesg ...
	I0415 05:19:21.950372   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 05:19:21.954983   14863 logs.go:123] Gathering logs for kube-apiserver [1248c859ecde] ...
	I0415 05:19:21.954988   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1248c859ecde"
	I0415 05:19:21.975214   14863 logs.go:123] Gathering logs for coredns [fdf453805976] ...
	I0415 05:19:21.975226   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf453805976"
	I0415 05:19:21.987473   14863 logs.go:123] Gathering logs for coredns [7a43e07ab8d8] ...
	I0415 05:19:21.987484   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a43e07ab8d8"
	I0415 05:19:21.999314   14863 logs.go:123] Gathering logs for kube-controller-manager [8c0e5649c5d3] ...
	I0415 05:19:21.999324   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c0e5649c5d3"
	I0415 05:19:22.016212   14863 logs.go:123] Gathering logs for kubelet ...
	I0415 05:19:22.016221   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0415 05:19:22.049703   14863 logs.go:123] Gathering logs for describe nodes ...
	I0415 05:19:22.049711   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 05:19:22.084198   14863 logs.go:123] Gathering logs for etcd [bd96711d1d33] ...
	I0415 05:19:22.084208   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96711d1d33"
	I0415 05:19:22.098174   14863 logs.go:123] Gathering logs for storage-provisioner [584f7e06679d] ...
	I0415 05:19:22.098185   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 584f7e06679d"
	I0415 05:19:22.109927   14863 logs.go:123] Gathering logs for Docker ...
	I0415 05:19:22.109936   14863 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 05:19:22.132961   14863 logs.go:123] Gathering logs for coredns [c2f804a817f4] ...
	I0415 05:19:22.132968   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2f804a817f4"
	I0415 05:19:22.144607   14863 logs.go:123] Gathering logs for kube-scheduler [af936538913c] ...
	I0415 05:19:22.144616   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af936538913c"
	I0415 05:19:22.159519   14863 logs.go:123] Gathering logs for kube-proxy [6da66369a00a] ...
	I0415 05:19:22.159531   14863 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6da66369a00a"
	I0415 05:19:24.674027   14863 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0415 05:19:29.676302   14863 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0415 05:19:29.682405   14863 out.go:177] 
	W0415 05:19:29.689449   14863 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0415 05:19:29.689496   14863 out.go:239] * 
	* 
	W0415 05:19:29.691964   14863 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:19:29.707214   14863 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-413000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (563.99s)
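
The exit above is a wait budget expiring rather than a crash: the upgrade test gives the restarted node 6m0s to produce a healthy apiserver, and every /healthz probe in the log times out until that budget is gone. A sketch of the poll-until-deadline shape this implies (the helper name and intervals are hypothetical, and shortened so the example finishes quickly):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForHealthy re-runs check until it succeeds or the deadline passes.
// In the log the budget is 6m0s with roughly 8s between diagnostic passes.
func waitForHealthy(check func() error, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		time.Sleep(interval) // the real loop gathers container logs here
	}
	return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	err := waitForHealthy(
		func() error { return errors.New("not ready") }, // always fails, like this run
		2*time.Second, 500*time.Millisecond)
	fmt.Println(err)
}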

TestPause/serial/Start (9.84s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-614000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-614000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.778470333s)

-- stdout --
	* [pause-614000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-614000" primary control-plane node in "pause-614000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-614000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-614000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-614000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-614000 -n pause-614000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-614000 -n pause-614000: exit status 7 (64.871125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-614000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.84s)
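Every failure of this form has the same proximate cause: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver cannot attach the VM to the socket_vmnet network and gives up after a single retry. A minimal standalone probe, sketched below in Go (a hypothetical helper, not part of the test suite; it assumes the default socket path shown in the logs and a stream Unix socket, which the "Connection refused" errors suggest), separates a missing socket file from a daemon that is not listening:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	// probeSocketVMnet reports whether something is accepting connections
	// on the socket_vmnet control socket.
	func probeSocketVMnet(sock string) error {
		if _, err := os.Stat(sock); err != nil {
			// The daemon never created the socket file at all.
			return fmt.Errorf("socket file check failed: %w", err)
		}
		conn, err := net.Dial("unix", sock)
		if err != nil {
			// The failure mode in this report: the file may exist,
			// but no daemon is listening on it.
			return fmt.Errorf("daemon not accepting connections: %w", err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

Running a probe like this (or simply pgrep'ing for socket_vmnet) before the suite would collapse this cascade of identical failures into one clear environment error.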

TestNetworkPlugins/group/auto/Start (10.03s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.029797125s)

-- stdout --
	* [auto-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-033000" primary control-plane node in "auto-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:16:50.625668   15029 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:16:50.626018   15029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:16:50.626023   15029 out.go:304] Setting ErrFile to fd 2...
	I0415 05:16:50.626026   15029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:16:50.626208   15029 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:16:50.627741   15029 out.go:298] Setting JSON to false
	I0415 05:16:50.644273   15029 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8181,"bootTime":1713175229,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:16:50.644344   15029 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:16:50.650242   15029 out.go:177] * [auto-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:16:50.658248   15029 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:16:50.662206   15029 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:16:50.658297   15029 notify.go:220] Checking for updates...
	I0415 05:16:50.668201   15029 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:16:50.671223   15029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:16:50.672684   15029 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:16:50.676169   15029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:16:50.679569   15029 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:16:50.679639   15029 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:16:50.679686   15029 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:16:50.684029   15029 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:16:50.691178   15029 start.go:297] selected driver: qemu2
	I0415 05:16:50.691184   15029 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:16:50.691190   15029 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:16:50.693361   15029 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:16:50.696220   15029 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:16:50.699300   15029 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:16:50.699332   15029 cni.go:84] Creating CNI manager for ""
	I0415 05:16:50.699339   15029 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:16:50.699343   15029 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:16:50.699377   15029 start.go:340] cluster config:
	{Name:auto-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:16:50.703889   15029 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:16:50.710094   15029 out.go:177] * Starting "auto-033000" primary control-plane node in "auto-033000" cluster
	I0415 05:16:50.714137   15029 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:16:50.714151   15029 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:16:50.714157   15029 cache.go:56] Caching tarball of preloaded images
	I0415 05:16:50.714213   15029 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:16:50.714218   15029 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:16:50.714270   15029 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/auto-033000/config.json ...
	I0415 05:16:50.714290   15029 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/auto-033000/config.json: {Name:mk14652a8c47aaa3f129d106ec8a1f7d6480a145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:16:50.714593   15029 start.go:360] acquireMachinesLock for auto-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:16:50.714626   15029 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "auto-033000"
	I0415 05:16:50.714642   15029 start.go:93] Provisioning new machine with config: &{Name:auto-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:16:50.714668   15029 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:16:50.722188   15029 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:16:50.737130   15029 start.go:159] libmachine.API.Create for "auto-033000" (driver="qemu2")
	I0415 05:16:50.737160   15029 client.go:168] LocalClient.Create starting
	I0415 05:16:50.737238   15029 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:16:50.737285   15029 main.go:141] libmachine: Decoding PEM data...
	I0415 05:16:50.737299   15029 main.go:141] libmachine: Parsing certificate...
	I0415 05:16:50.737346   15029 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:16:50.737385   15029 main.go:141] libmachine: Decoding PEM data...
	I0415 05:16:50.737393   15029 main.go:141] libmachine: Parsing certificate...
	I0415 05:16:50.737705   15029 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:16:50.869124   15029 main.go:141] libmachine: Creating SSH key...
	I0415 05:16:51.104470   15029 main.go:141] libmachine: Creating Disk image...
	I0415 05:16:51.104480   15029 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:16:51.104707   15029 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2
	I0415 05:16:51.118380   15029 main.go:141] libmachine: STDOUT: 
	I0415 05:16:51.118406   15029 main.go:141] libmachine: STDERR: 
	I0415 05:16:51.118472   15029 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2 +20000M
	I0415 05:16:51.130426   15029 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:16:51.130449   15029 main.go:141] libmachine: STDERR: 
	I0415 05:16:51.130466   15029 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2
	I0415 05:16:51.130470   15029 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:16:51.130500   15029 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ed:50:8a:6a:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2
	I0415 05:16:51.132294   15029 main.go:141] libmachine: STDOUT: 
	I0415 05:16:51.132309   15029 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:16:51.132326   15029 client.go:171] duration metric: took 395.177792ms to LocalClient.Create
	I0415 05:16:53.134360   15029 start.go:128] duration metric: took 2.419778333s to createHost
	I0415 05:16:53.134408   15029 start.go:83] releasing machines lock for "auto-033000", held for 2.419876958s
	W0415 05:16:53.134438   15029 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:16:53.151525   15029 out.go:177] * Deleting "auto-033000" in qemu2 ...
	W0415 05:16:53.163040   15029 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:16:53.163058   15029 start.go:728] Will try again in 5 seconds ...
	I0415 05:16:58.165223   15029 start.go:360] acquireMachinesLock for auto-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:16:58.165794   15029 start.go:364] duration metric: took 412.375µs to acquireMachinesLock for "auto-033000"
	I0415 05:16:58.165911   15029 start.go:93] Provisioning new machine with config: &{Name:auto-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:16:58.166196   15029 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:16:58.172708   15029 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:16:58.222069   15029 start.go:159] libmachine.API.Create for "auto-033000" (driver="qemu2")
	I0415 05:16:58.222124   15029 client.go:168] LocalClient.Create starting
	I0415 05:16:58.222259   15029 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:16:58.222335   15029 main.go:141] libmachine: Decoding PEM data...
	I0415 05:16:58.222354   15029 main.go:141] libmachine: Parsing certificate...
	I0415 05:16:58.222411   15029 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:16:58.222454   15029 main.go:141] libmachine: Decoding PEM data...
	I0415 05:16:58.222466   15029 main.go:141] libmachine: Parsing certificate...
	I0415 05:16:58.222974   15029 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:16:58.365459   15029 main.go:141] libmachine: Creating SSH key...
	I0415 05:16:58.562766   15029 main.go:141] libmachine: Creating Disk image...
	I0415 05:16:58.562775   15029 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:16:58.563004   15029 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2
	I0415 05:16:58.576200   15029 main.go:141] libmachine: STDOUT: 
	I0415 05:16:58.576223   15029 main.go:141] libmachine: STDERR: 
	I0415 05:16:58.576299   15029 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2 +20000M
	I0415 05:16:58.587533   15029 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:16:58.587550   15029 main.go:141] libmachine: STDERR: 
	I0415 05:16:58.587563   15029 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2
	I0415 05:16:58.587568   15029 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:16:58.587602   15029 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:a8:5f:b0:a3:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/auto-033000/disk.qcow2
	I0415 05:16:58.589391   15029 main.go:141] libmachine: STDOUT: 
	I0415 05:16:58.589411   15029 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:16:58.589424   15029 client.go:171] duration metric: took 367.310042ms to LocalClient.Create
	I0415 05:17:00.590946   15029 start.go:128] duration metric: took 2.424815875s to createHost
	I0415 05:17:00.590966   15029 start.go:83] releasing machines lock for "auto-033000", held for 2.425249958s
	W0415 05:17:00.591030   15029 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:00.604246   15029 out.go:177] 
	W0415 05:17:00.608240   15029 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:17:00.608250   15029 out.go:239] * 
	* 
	W0415 05:17:00.608774   15029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:17:00.617292   15029 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.03s)
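The command lines captured above also show how the network is wired up: minikube does not start qemu-system-aarch64 directly but through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands that connection to qemu as file descriptor 3, consumed by -netdev socket,id=net0,fd=3. The Go sketch below illustrates that fd-passing pattern only; the real client's handshake with the daemon may differ:

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connect to the daemon; this is the step that fails throughout this report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the fd so the child can inherit it
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64",
			"-M", "virt", "-display", "none",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Because the connect happens before the VM is launched, a dead daemon means qemu never starts at all, which is why each of these tests fails within about ten seconds instead of running to its wait timeout.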

TestNetworkPlugins/group/kindnet/Start (9.73s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.727623s)

-- stdout --
	* [kindnet-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-033000" primary control-plane node in "kindnet-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:17:02.954879   15139 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:17:02.954991   15139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:02.954995   15139 out.go:304] Setting ErrFile to fd 2...
	I0415 05:17:02.954997   15139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:02.955113   15139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:17:02.956253   15139 out.go:298] Setting JSON to false
	I0415 05:17:02.972597   15139 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8193,"bootTime":1713175229,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:17:02.972658   15139 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:17:02.978450   15139 out.go:177] * [kindnet-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:17:02.985365   15139 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:17:02.989442   15139 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:17:02.985377   15139 notify.go:220] Checking for updates...
	I0415 05:17:02.995425   15139 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:17:02.998419   15139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:17:03.001451   15139 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:17:03.002935   15139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:17:03.006743   15139 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:17:03.006818   15139 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:17:03.006857   15139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:17:03.011442   15139 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:17:03.017430   15139 start.go:297] selected driver: qemu2
	I0415 05:17:03.017437   15139 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:17:03.017442   15139 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:17:03.019658   15139 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:17:03.022412   15139 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:17:03.026299   15139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:17:03.026335   15139 cni.go:84] Creating CNI manager for "kindnet"
	I0415 05:17:03.026342   15139 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 05:17:03.026376   15139 start.go:340] cluster config:
	{Name:kindnet-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:17:03.030708   15139 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:17:03.037458   15139 out.go:177] * Starting "kindnet-033000" primary control-plane node in "kindnet-033000" cluster
	I0415 05:17:03.041331   15139 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:17:03.041352   15139 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:17:03.041357   15139 cache.go:56] Caching tarball of preloaded images
	I0415 05:17:03.041414   15139 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:17:03.041419   15139 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:17:03.041474   15139 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/kindnet-033000/config.json ...
	I0415 05:17:03.041486   15139 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/kindnet-033000/config.json: {Name:mk391c2c1fa54bedf7235d62da13a7fc2a78eb20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:17:03.041697   15139 start.go:360] acquireMachinesLock for kindnet-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:03.041733   15139 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "kindnet-033000"
	I0415 05:17:03.041748   15139 start.go:93] Provisioning new machine with config: &{Name:kindnet-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:03.041782   15139 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:03.045487   15139 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:03.060850   15139 start.go:159] libmachine.API.Create for "kindnet-033000" (driver="qemu2")
	I0415 05:17:03.060873   15139 client.go:168] LocalClient.Create starting
	I0415 05:17:03.060951   15139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:03.060980   15139 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:03.060989   15139 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:03.061023   15139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:03.061046   15139 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:03.061054   15139 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:03.061408   15139 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:03.192628   15139 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:03.255245   15139 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:03.255252   15139 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:03.255421   15139 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2
	I0415 05:17:03.267978   15139 main.go:141] libmachine: STDOUT: 
	I0415 05:17:03.267996   15139 main.go:141] libmachine: STDERR: 
	I0415 05:17:03.268050   15139 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2 +20000M
	I0415 05:17:03.279164   15139 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:03.279181   15139 main.go:141] libmachine: STDERR: 
	I0415 05:17:03.279199   15139 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2
	I0415 05:17:03.279203   15139 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:03.279237   15139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:fb:1e:53:8d:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2
	I0415 05:17:03.281072   15139 main.go:141] libmachine: STDOUT: 
	I0415 05:17:03.281100   15139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:03.281123   15139 client.go:171] duration metric: took 220.252542ms to LocalClient.Create
	I0415 05:17:05.283141   15139 start.go:128] duration metric: took 2.241441292s to createHost
	I0415 05:17:05.283169   15139 start.go:83] releasing machines lock for "kindnet-033000", held for 2.241524625s
	W0415 05:17:05.283203   15139 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:05.289074   15139 out.go:177] * Deleting "kindnet-033000" in qemu2 ...
	W0415 05:17:05.299942   15139 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:05.299954   15139 start.go:728] Will try again in 5 seconds ...
	I0415 05:17:10.301961   15139 start.go:360] acquireMachinesLock for kindnet-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:10.302317   15139 start.go:364] duration metric: took 277.584µs to acquireMachinesLock for "kindnet-033000"
	I0415 05:17:10.302390   15139 start.go:93] Provisioning new machine with config: &{Name:kindnet-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:10.302527   15139 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:10.311860   15139 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:10.344264   15139 start.go:159] libmachine.API.Create for "kindnet-033000" (driver="qemu2")
	I0415 05:17:10.344302   15139 client.go:168] LocalClient.Create starting
	I0415 05:17:10.344420   15139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:10.344473   15139 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:10.344488   15139 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:10.344550   15139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:10.344589   15139 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:10.344605   15139 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:10.345293   15139 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:10.486045   15139 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:10.576335   15139 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:10.576343   15139 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:10.576526   15139 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2
	I0415 05:17:10.589137   15139 main.go:141] libmachine: STDOUT: 
	I0415 05:17:10.589155   15139 main.go:141] libmachine: STDERR: 
	I0415 05:17:10.589208   15139 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2 +20000M
	I0415 05:17:10.600855   15139 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:10.600877   15139 main.go:141] libmachine: STDERR: 
	I0415 05:17:10.600887   15139 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2
	I0415 05:17:10.600891   15139 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:10.600926   15139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ba:da:a6:be:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kindnet-033000/disk.qcow2
	I0415 05:17:10.602823   15139 main.go:141] libmachine: STDOUT: 
	I0415 05:17:10.602839   15139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:10.602852   15139 client.go:171] duration metric: took 258.556584ms to LocalClient.Create
	I0415 05:17:12.604975   15139 start.go:128] duration metric: took 2.302516042s to createHost
	I0415 05:17:12.605019   15139 start.go:83] releasing machines lock for "kindnet-033000", held for 2.302775666s
	W0415 05:17:12.605180   15139 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:12.625622   15139 out.go:177] 
	W0415 05:17:12.629553   15139 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:17:12.629564   15139 out.go:239] * 
	* 
	W0415 05:17:12.630562   15139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:17:12.642556   15139 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.73s)

TestNetworkPlugins/group/flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.76240325s)

-- stdout --
	* [flannel-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-033000" primary control-plane node in "flannel-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:17:15.004077   15256 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:17:15.004460   15256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:15.004465   15256 out.go:304] Setting ErrFile to fd 2...
	I0415 05:17:15.004468   15256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:15.004663   15256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:17:15.006143   15256 out.go:298] Setting JSON to false
	I0415 05:17:15.022717   15256 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8206,"bootTime":1713175229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:17:15.022782   15256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:17:15.028871   15256 out.go:177] * [flannel-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:17:15.040884   15256 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:17:15.035911   15256 notify.go:220] Checking for updates...
	I0415 05:17:15.048814   15256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:17:15.056837   15256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:17:15.063713   15256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:17:15.071872   15256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:17:15.074838   15256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:17:15.079267   15256 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:17:15.079348   15256 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:17:15.079411   15256 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:17:15.083806   15256 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:17:15.091809   15256 start.go:297] selected driver: qemu2
	I0415 05:17:15.091816   15256 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:17:15.091822   15256 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:17:15.094226   15256 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:17:15.097863   15256 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:17:15.101887   15256 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:17:15.101918   15256 cni.go:84] Creating CNI manager for "flannel"
	I0415 05:17:15.101922   15256 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0415 05:17:15.101953   15256 start.go:340] cluster config:
	{Name:flannel-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:17:15.106286   15256 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:17:15.114792   15256 out.go:177] * Starting "flannel-033000" primary control-plane node in "flannel-033000" cluster
	I0415 05:17:15.118857   15256 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:17:15.118884   15256 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:17:15.118895   15256 cache.go:56] Caching tarball of preloaded images
	I0415 05:17:15.118972   15256 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:17:15.118977   15256 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:17:15.119044   15256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/flannel-033000/config.json ...
	I0415 05:17:15.119062   15256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/flannel-033000/config.json: {Name:mka36651d6327bb93d0271b228951533e2021b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:17:15.119301   15256 start.go:360] acquireMachinesLock for flannel-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:15.119334   15256 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "flannel-033000"
	I0415 05:17:15.119351   15256 start.go:93] Provisioning new machine with config: &{Name:flannel-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:15.119379   15256 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:15.126794   15256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:15.142400   15256 start.go:159] libmachine.API.Create for "flannel-033000" (driver="qemu2")
	I0415 05:17:15.142426   15256 client.go:168] LocalClient.Create starting
	I0415 05:17:15.142498   15256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:15.142527   15256 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:15.142538   15256 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:15.142574   15256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:15.142600   15256 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:15.142609   15256 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:15.142929   15256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:15.277619   15256 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:15.341907   15256 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:15.341915   15256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:15.342099   15256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2
	I0415 05:17:15.355041   15256 main.go:141] libmachine: STDOUT: 
	I0415 05:17:15.355059   15256 main.go:141] libmachine: STDERR: 
	I0415 05:17:15.355109   15256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2 +20000M
	I0415 05:17:15.366603   15256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:15.366625   15256 main.go:141] libmachine: STDERR: 
	I0415 05:17:15.366637   15256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2
	I0415 05:17:15.366643   15256 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:15.366675   15256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:e0:14:be:e9:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2
	I0415 05:17:15.368431   15256 main.go:141] libmachine: STDOUT: 
	I0415 05:17:15.368446   15256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:15.368464   15256 client.go:171] duration metric: took 226.042167ms to LocalClient.Create
	I0415 05:17:17.370541   15256 start.go:128] duration metric: took 2.251237459s to createHost
	I0415 05:17:17.370583   15256 start.go:83] releasing machines lock for "flannel-033000", held for 2.251335667s
	W0415 05:17:17.370614   15256 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:17.381906   15256 out.go:177] * Deleting "flannel-033000" in qemu2 ...
	W0415 05:17:17.396404   15256 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:17.396423   15256 start.go:728] Will try again in 5 seconds ...
	I0415 05:17:22.398423   15256 start.go:360] acquireMachinesLock for flannel-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:22.398839   15256 start.go:364] duration metric: took 329.875µs to acquireMachinesLock for "flannel-033000"
	I0415 05:17:22.398953   15256 start.go:93] Provisioning new machine with config: &{Name:flannel-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:22.399303   15256 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:22.404078   15256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:22.451945   15256 start.go:159] libmachine.API.Create for "flannel-033000" (driver="qemu2")
	I0415 05:17:22.451991   15256 client.go:168] LocalClient.Create starting
	I0415 05:17:22.452129   15256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:22.452198   15256 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:22.452213   15256 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:22.452278   15256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:22.452325   15256 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:22.452340   15256 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:22.452854   15256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:22.596242   15256 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:22.671970   15256 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:22.671977   15256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:22.672157   15256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2
	I0415 05:17:22.684925   15256 main.go:141] libmachine: STDOUT: 
	I0415 05:17:22.684945   15256 main.go:141] libmachine: STDERR: 
	I0415 05:17:22.685009   15256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2 +20000M
	I0415 05:17:22.695766   15256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:22.695782   15256 main.go:141] libmachine: STDERR: 
	I0415 05:17:22.695791   15256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2
	I0415 05:17:22.695796   15256 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:22.695839   15256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:78:5e:9d:ed:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/flannel-033000/disk.qcow2
	I0415 05:17:22.697541   15256 main.go:141] libmachine: STDOUT: 
	I0415 05:17:22.697558   15256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:22.697569   15256 client.go:171] duration metric: took 245.584166ms to LocalClient.Create
	I0415 05:17:24.699581   15256 start.go:128] duration metric: took 2.300344917s to createHost
	I0415 05:17:24.699628   15256 start.go:83] releasing machines lock for "flannel-033000", held for 2.300856375s
	W0415 05:17:24.699809   15256 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:24.708214   15256 out.go:177] 
	W0415 05:17:24.716284   15256 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:17:24.716302   15256 out.go:239] * 
	W0415 05:17:24.717374   15256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:17:24.727167   15256 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)
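
Every VM create in this run dies at the same step: socket_vmnet_client exits with `Failed to connect to "/var/run/socket_vmnet": Connection refused` before qemu-system-aarch64 produces any output, so the failure sits in front of QEMU, not inside it. A minimal standalone Go sketch along these lines (not part of minikube or net_test.go; the socket path is taken from the log above, everything else is illustrative) can probe the same unix socket directly on the agent:

	// socketprobe.go - hypothetical diagnostic, not part of this test suite.
	// It dials the unix socket that socket_vmnet_client dials in the logs
	// above; "connection refused" here reproduces the failure without QEMU.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const path = "/var/run/socket_vmnet" // path copied from the failing command line

		// Distinguish "socket file missing" from "nobody listening".
		if _, err := os.Stat(path); err != nil {
			fmt.Printf("socket file problem: %v (is socket_vmnet installed?)\n", err)
			os.Exit(1)
		}
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			fmt.Printf("dial failed: %v (is the socket_vmnet daemon running?)\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this sketch also reports "connection refused", the socket file exists but no socket_vmnet daemon is accepting on it, which would point at the agent's socket_vmnet service rather than at minikube or the test code.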

TestNetworkPlugins/group/enable-default-cni/Start (9.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.801382125s)

-- stdout --
	* [enable-default-cni-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-033000" primary control-plane node in "enable-default-cni-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:17:27.172068   15374 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:17:27.172417   15374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:27.172421   15374 out.go:304] Setting ErrFile to fd 2...
	I0415 05:17:27.172423   15374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:27.172600   15374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:17:27.174080   15374 out.go:298] Setting JSON to false
	I0415 05:17:27.190669   15374 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8218,"bootTime":1713175229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:17:27.190787   15374 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:17:27.197847   15374 out.go:177] * [enable-default-cni-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:17:27.210769   15374 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:17:27.205851   15374 notify.go:220] Checking for updates...
	I0415 05:17:27.220698   15374 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:17:27.227757   15374 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:17:27.233771   15374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:17:27.237726   15374 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:17:27.244701   15374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:17:27.248173   15374 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:17:27.248241   15374 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:17:27.248288   15374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:17:27.252728   15374 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:17:27.259708   15374 start.go:297] selected driver: qemu2
	I0415 05:17:27.259715   15374 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:17:27.259720   15374 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:17:27.262168   15374 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:17:27.266769   15374 out.go:177] * Automatically selected the socket_vmnet network
	E0415 05:17:27.270878   15374 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0415 05:17:27.270900   15374 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:17:27.270952   15374 cni.go:84] Creating CNI manager for "bridge"
	I0415 05:17:27.270964   15374 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:17:27.270997   15374 start.go:340] cluster config:
	{Name:enable-default-cni-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:17:27.275671   15374 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:17:27.282767   15374 out.go:177] * Starting "enable-default-cni-033000" primary control-plane node in "enable-default-cni-033000" cluster
	I0415 05:17:27.286568   15374 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:17:27.286597   15374 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:17:27.286606   15374 cache.go:56] Caching tarball of preloaded images
	I0415 05:17:27.286678   15374 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:17:27.286684   15374 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:17:27.286765   15374 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/enable-default-cni-033000/config.json ...
	I0415 05:17:27.286782   15374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/enable-default-cni-033000/config.json: {Name:mk2a66e7689d4b76a9e415d0013e188f9ae12913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:17:27.287129   15374 start.go:360] acquireMachinesLock for enable-default-cni-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:27.287176   15374 start.go:364] duration metric: took 36.375µs to acquireMachinesLock for "enable-default-cni-033000"
	I0415 05:17:27.287190   15374 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:27.287220   15374 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:27.294663   15374 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:27.311346   15374 start.go:159] libmachine.API.Create for "enable-default-cni-033000" (driver="qemu2")
	I0415 05:17:27.311394   15374 client.go:168] LocalClient.Create starting
	I0415 05:17:27.311462   15374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:27.311494   15374 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:27.311504   15374 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:27.311538   15374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:27.311560   15374 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:27.311568   15374 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:27.311880   15374 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:27.445384   15374 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:27.537861   15374 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:27.537870   15374 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:27.538056   15374 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2
	I0415 05:17:27.550593   15374 main.go:141] libmachine: STDOUT: 
	I0415 05:17:27.550616   15374 main.go:141] libmachine: STDERR: 
	I0415 05:17:27.550684   15374 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2 +20000M
	I0415 05:17:27.561697   15374 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:27.561715   15374 main.go:141] libmachine: STDERR: 
	I0415 05:17:27.561730   15374 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2
	I0415 05:17:27.561734   15374 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:27.561759   15374 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:62:2a:d3:31:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2
	I0415 05:17:27.563374   15374 main.go:141] libmachine: STDOUT: 
	I0415 05:17:27.563397   15374 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:27.563415   15374 client.go:171] duration metric: took 252.025958ms to LocalClient.Create
	I0415 05:17:29.565454   15374 start.go:128] duration metric: took 2.278318667s to createHost
	I0415 05:17:29.565486   15374 start.go:83] releasing machines lock for "enable-default-cni-033000", held for 2.278397417s
	W0415 05:17:29.565504   15374 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:29.574451   15374 out.go:177] * Deleting "enable-default-cni-033000" in qemu2 ...
	W0415 05:17:29.587422   15374 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:29.587431   15374 start.go:728] Will try again in 5 seconds ...
	I0415 05:17:34.589430   15374 start.go:360] acquireMachinesLock for enable-default-cni-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:34.589830   15374 start.go:364] duration metric: took 293.958µs to acquireMachinesLock for "enable-default-cni-033000"
	I0415 05:17:34.589915   15374 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:34.590124   15374 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:34.599619   15374 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:34.640600   15374 start.go:159] libmachine.API.Create for "enable-default-cni-033000" (driver="qemu2")
	I0415 05:17:34.640641   15374 client.go:168] LocalClient.Create starting
	I0415 05:17:34.640761   15374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:34.640847   15374 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:34.640863   15374 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:34.640913   15374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:34.640955   15374 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:34.640968   15374 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:34.641508   15374 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:34.783267   15374 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:34.880890   15374 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:34.880896   15374 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:34.881061   15374 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2
	I0415 05:17:34.893842   15374 main.go:141] libmachine: STDOUT: 
	I0415 05:17:34.893861   15374 main.go:141] libmachine: STDERR: 
	I0415 05:17:34.893919   15374 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2 +20000M
	I0415 05:17:34.905328   15374 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:34.905342   15374 main.go:141] libmachine: STDERR: 
	I0415 05:17:34.905354   15374 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2
	I0415 05:17:34.905365   15374 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:34.905397   15374 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:79:d0:90:33:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/enable-default-cni-033000/disk.qcow2
	I0415 05:17:34.907036   15374 main.go:141] libmachine: STDOUT: 
	I0415 05:17:34.907051   15374 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:34.907061   15374 client.go:171] duration metric: took 266.427083ms to LocalClient.Create
	I0415 05:17:36.909106   15374 start.go:128] duration metric: took 2.319044584s to createHost
	I0415 05:17:36.909146   15374 start.go:83] releasing machines lock for "enable-default-cni-033000", held for 2.319396083s
	W0415 05:17:36.909307   15374 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:36.917659   15374 out.go:177] 
	W0415 05:17:36.922651   15374 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:17:36.922665   15374 out.go:239] * 
	W0415 05:17:36.923532   15374 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:17:36.934585   15374 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.80s)
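
Note the line at 05:17:27.270878 above: the deprecated --enable-default-cni flag is translated to --cni=bridge, which is why this profile's cluster config ends up with CNI:bridge rather than a dedicated default CNI. The failure's control flow is the same two-attempt shape seen in every start in this report: create the host, fail, delete the half-created profile, wait 5 seconds, retry once, then exit with status 80. A condensed, hypothetical Go sketch of that shape (names invented; the real logic lives in minikube's start.go) shows why each of these tests burns roughly 10 seconds before failing:

	// retrysketch.go - hypothetical condensation of the behavior visible in
	// the logs: two attempts separated by a 5s pause, then a fatal exit 80.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for minikube's host-creation step; here it always
	// fails the way the logs above do.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		profile := "enable-default-cni-033000"
		err := createHost(profile)
		if err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			err = createHost(profile)
		}
		if err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // exit status 80, as asserted by net_test.go:114
		}
	}

Because both attempts fail at the socket dial almost immediately, nearly all of each test's ~9.8s duration is the fixed 5-second retry pause plus the two ~2.3s createHost durations recorded in the metrics above.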

TestNetworkPlugins/group/bridge/Start (9.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.831792458s)

-- stdout --
	* [bridge-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-033000" primary control-plane node in "bridge-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:17:39.146330   15500 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:17:39.146454   15500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:39.146458   15500 out.go:304] Setting ErrFile to fd 2...
	I0415 05:17:39.146461   15500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:39.146586   15500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:17:39.147629   15500 out.go:298] Setting JSON to false
	I0415 05:17:39.164009   15500 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8230,"bootTime":1713175229,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:17:39.164073   15500 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:17:39.168269   15500 out.go:177] * [bridge-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:17:39.179111   15500 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:17:39.175211   15500 notify.go:220] Checking for updates...
	I0415 05:17:39.184158   15500 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:17:39.192150   15500 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:17:39.195217   15500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:17:39.203059   15500 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:17:39.211201   15500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:17:39.215421   15500 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:17:39.215493   15500 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:17:39.215545   15500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:17:39.219103   15500 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:17:39.226195   15500 start.go:297] selected driver: qemu2
	I0415 05:17:39.226202   15500 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:17:39.226208   15500 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:17:39.228669   15500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:17:39.233222   15500 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:17:39.237108   15500 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:17:39.237155   15500 cni.go:84] Creating CNI manager for "bridge"
	I0415 05:17:39.237159   15500 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:17:39.237198   15500 start.go:340] cluster config:
	{Name:bridge-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:17:39.241736   15500 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:17:39.250163   15500 out.go:177] * Starting "bridge-033000" primary control-plane node in "bridge-033000" cluster
	I0415 05:17:39.253142   15500 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:17:39.253164   15500 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:17:39.253173   15500 cache.go:56] Caching tarball of preloaded images
	I0415 05:17:39.253234   15500 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:17:39.253239   15500 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:17:39.253300   15500 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/bridge-033000/config.json ...
	I0415 05:17:39.253312   15500 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/bridge-033000/config.json: {Name:mk7e8c11f27712da348fa58165a6129dc09a8ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:17:39.253657   15500 start.go:360] acquireMachinesLock for bridge-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:39.253691   15500 start.go:364] duration metric: took 27.791µs to acquireMachinesLock for "bridge-033000"
	I0415 05:17:39.253701   15500 start.go:93] Provisioning new machine with config: &{Name:bridge-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:39.253732   15500 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:39.261137   15500 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:39.278117   15500 start.go:159] libmachine.API.Create for "bridge-033000" (driver="qemu2")
	I0415 05:17:39.278146   15500 client.go:168] LocalClient.Create starting
	I0415 05:17:39.278228   15500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:39.278260   15500 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:39.278272   15500 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:39.278310   15500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:39.278335   15500 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:39.278345   15500 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:39.278689   15500 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:39.411964   15500 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:39.494323   15500 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:39.494329   15500 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:39.494502   15500 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2
	I0415 05:17:39.507198   15500 main.go:141] libmachine: STDOUT: 
	I0415 05:17:39.507217   15500 main.go:141] libmachine: STDERR: 
	I0415 05:17:39.507269   15500 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2 +20000M
	I0415 05:17:39.518719   15500 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:39.518743   15500 main.go:141] libmachine: STDERR: 
	I0415 05:17:39.518757   15500 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2
	I0415 05:17:39.518764   15500 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:39.518793   15500 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:a5:52:4a:b8:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2
	I0415 05:17:39.520421   15500 main.go:141] libmachine: STDOUT: 
	I0415 05:17:39.520436   15500 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:39.520458   15500 client.go:171] duration metric: took 242.317334ms to LocalClient.Create
	I0415 05:17:41.522573   15500 start.go:128] duration metric: took 2.268907041s to createHost
	I0415 05:17:41.522639   15500 start.go:83] releasing machines lock for "bridge-033000", held for 2.26903325s
	W0415 05:17:41.522735   15500 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:41.540001   15500 out.go:177] * Deleting "bridge-033000" in qemu2 ...
	W0415 05:17:41.568109   15500 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:41.568149   15500 start.go:728] Will try again in 5 seconds ...
	I0415 05:17:46.570125   15500 start.go:360] acquireMachinesLock for bridge-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:46.570638   15500 start.go:364] duration metric: took 352.958µs to acquireMachinesLock for "bridge-033000"
	I0415 05:17:46.570734   15500 start.go:93] Provisioning new machine with config: &{Name:bridge-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:46.571082   15500 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:46.580800   15500 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:46.631380   15500 start.go:159] libmachine.API.Create for "bridge-033000" (driver="qemu2")
	I0415 05:17:46.631450   15500 client.go:168] LocalClient.Create starting
	I0415 05:17:46.631589   15500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:46.631657   15500 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:46.631673   15500 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:46.631756   15500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:46.631801   15500 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:46.631817   15500 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:46.632358   15500 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:46.780364   15500 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:46.877825   15500 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:46.877838   15500 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:46.878041   15500 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2
	I0415 05:17:46.890491   15500 main.go:141] libmachine: STDOUT: 
	I0415 05:17:46.890511   15500 main.go:141] libmachine: STDERR: 
	I0415 05:17:46.890574   15500 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2 +20000M
	I0415 05:17:46.901570   15500 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:46.901587   15500 main.go:141] libmachine: STDERR: 
	I0415 05:17:46.901607   15500 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2
	I0415 05:17:46.901615   15500 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:46.901657   15500 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:50:d6:5c:6c:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/bridge-033000/disk.qcow2
	I0415 05:17:46.903356   15500 main.go:141] libmachine: STDOUT: 
	I0415 05:17:46.903376   15500 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:46.903392   15500 client.go:171] duration metric: took 271.946458ms to LocalClient.Create
	I0415 05:17:48.905455   15500 start.go:128] duration metric: took 2.334431708s to createHost
	I0415 05:17:48.905489   15500 start.go:83] releasing machines lock for "bridge-033000", held for 2.334925875s
	W0415 05:17:48.905731   15500 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:48.917176   15500 out.go:177] 
	W0415 05:17:48.920288   15500 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:17:48.920368   15500 out.go:239] * 
	* 
	W0415 05:17:48.922072   15500 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:17:48.934169   15500 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.83s)
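Note on the failure mode: each start attempt above launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client is refused a connection to the daemon socket at /var/run/socket_vmnet, so the VM never boots and the retry five seconds later fails the same way. A minimal pre-flight probe for the CI host, sketched in Go under the assumption that it runs with the same paths shown in this log (it is not part of net_test.go or the minikube tree):

	// probe_socket_vmnet.go - hypothetical diagnostic sketch, not minikube code.
	// Dials the unix socket that socket_vmnet_client uses and reports whether
	// the socket_vmnet daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A "connection refused" from this probe would indicate the daemon is not running (or not listening on that path), which is consistent with every qemu2/socket_vmnet test in this group failing in under ten seconds.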

TestNetworkPlugins/group/kubenet/Start (9.75s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.749570416s)

-- stdout --
	* [kubenet-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-033000" primary control-plane node in "kubenet-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:17:51.173812   15612 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:17:51.173941   15612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:51.173945   15612 out.go:304] Setting ErrFile to fd 2...
	I0415 05:17:51.173948   15612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:17:51.174069   15612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:17:51.175135   15612 out.go:298] Setting JSON to false
	I0415 05:17:51.191078   15612 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8242,"bootTime":1713175229,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:17:51.191145   15612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:17:51.196045   15612 out.go:177] * [kubenet-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:17:51.203947   15612 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:17:51.207982   15612 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:17:51.204009   15612 notify.go:220] Checking for updates...
	I0415 05:17:51.213978   15612 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:17:51.217907   15612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:17:51.220941   15612 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:17:51.223973   15612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:17:51.227285   15612 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:17:51.227351   15612 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:17:51.227399   15612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:17:51.231920   15612 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:17:51.238997   15612 start.go:297] selected driver: qemu2
	I0415 05:17:51.239003   15612 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:17:51.239009   15612 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:17:51.241201   15612 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:17:51.243912   15612 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:17:51.247009   15612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:17:51.247035   15612 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0415 05:17:51.247070   15612 start.go:340] cluster config:
	{Name:kubenet-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:17:51.251332   15612 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:17:51.256958   15612 out.go:177] * Starting "kubenet-033000" primary control-plane node in "kubenet-033000" cluster
	I0415 05:17:51.260929   15612 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:17:51.260950   15612 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:17:51.260956   15612 cache.go:56] Caching tarball of preloaded images
	I0415 05:17:51.261003   15612 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:17:51.261007   15612 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:17:51.261061   15612 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/kubenet-033000/config.json ...
	I0415 05:17:51.261072   15612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/kubenet-033000/config.json: {Name:mkb2d2abfb464e6b4a1e39f750b90b41370762a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:17:51.261286   15612 start.go:360] acquireMachinesLock for kubenet-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:51.261316   15612 start.go:364] duration metric: took 25µs to acquireMachinesLock for "kubenet-033000"
	I0415 05:17:51.261326   15612 start.go:93] Provisioning new machine with config: &{Name:kubenet-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:51.261351   15612 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:51.269935   15612 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:51.284913   15612 start.go:159] libmachine.API.Create for "kubenet-033000" (driver="qemu2")
	I0415 05:17:51.284945   15612 client.go:168] LocalClient.Create starting
	I0415 05:17:51.285016   15612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:51.285046   15612 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:51.285059   15612 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:51.285100   15612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:51.285124   15612 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:51.285133   15612 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:51.285482   15612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:51.420073   15612 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:51.529068   15612 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:51.529077   15612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:51.529266   15612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2
	I0415 05:17:51.541903   15612 main.go:141] libmachine: STDOUT: 
	I0415 05:17:51.541924   15612 main.go:141] libmachine: STDERR: 
	I0415 05:17:51.541981   15612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2 +20000M
	I0415 05:17:51.553232   15612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:51.553255   15612 main.go:141] libmachine: STDERR: 
	I0415 05:17:51.553267   15612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2
	I0415 05:17:51.553271   15612 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:51.553302   15612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:48:47:b9:9d:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2
	I0415 05:17:51.554907   15612 main.go:141] libmachine: STDOUT: 
	I0415 05:17:51.554936   15612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:51.554954   15612 client.go:171] duration metric: took 270.015709ms to LocalClient.Create
	I0415 05:17:53.557231   15612 start.go:128] duration metric: took 2.295915666s to createHost
	I0415 05:17:53.557385   15612 start.go:83] releasing machines lock for "kubenet-033000", held for 2.296154625s
	W0415 05:17:53.557437   15612 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:53.579828   15612 out.go:177] * Deleting "kubenet-033000" in qemu2 ...
	W0415 05:17:53.604510   15612 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:17:53.604556   15612 start.go:728] Will try again in 5 seconds ...
	I0415 05:17:58.606504   15612 start.go:360] acquireMachinesLock for kubenet-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:17:58.607021   15612 start.go:364] duration metric: took 429.875µs to acquireMachinesLock for "kubenet-033000"
	I0415 05:17:58.607153   15612 start.go:93] Provisioning new machine with config: &{Name:kubenet-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:17:58.607353   15612 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:17:58.618867   15612 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:17:58.651162   15612 start.go:159] libmachine.API.Create for "kubenet-033000" (driver="qemu2")
	I0415 05:17:58.651206   15612 client.go:168] LocalClient.Create starting
	I0415 05:17:58.651319   15612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:17:58.651387   15612 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:58.651401   15612 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:58.651459   15612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:17:58.651492   15612 main.go:141] libmachine: Decoding PEM data...
	I0415 05:17:58.651501   15612 main.go:141] libmachine: Parsing certificate...
	I0415 05:17:58.651929   15612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:17:58.793113   15612 main.go:141] libmachine: Creating SSH key...
	I0415 05:17:58.832575   15612 main.go:141] libmachine: Creating Disk image...
	I0415 05:17:58.832584   15612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:17:58.832772   15612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2
	I0415 05:17:58.845394   15612 main.go:141] libmachine: STDOUT: 
	I0415 05:17:58.845416   15612 main.go:141] libmachine: STDERR: 
	I0415 05:17:58.845480   15612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2 +20000M
	I0415 05:17:58.856802   15612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:17:58.856820   15612 main.go:141] libmachine: STDERR: 
	I0415 05:17:58.856832   15612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2
	I0415 05:17:58.856836   15612 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:17:58.856875   15612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:cf:99:61:89:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/kubenet-033000/disk.qcow2
	I0415 05:17:58.858510   15612 main.go:141] libmachine: STDOUT: 
	I0415 05:17:58.858526   15612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:17:58.858538   15612 client.go:171] duration metric: took 207.336708ms to LocalClient.Create
	I0415 05:18:00.860565   15612 start.go:128] duration metric: took 2.253270125s to createHost
	I0415 05:18:00.860588   15612 start.go:83] releasing machines lock for "kubenet-033000", held for 2.253613875s
	W0415 05:18:00.860745   15612 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:00.871061   15612 out.go:177] 
	W0415 05:18:00.875067   15612 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:18:00.875079   15612 out.go:239] * 
	* 
	W0415 05:18:00.875549   15612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:18:00.883029   15612 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.75s)
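The "exit status 80" that net_test.go reports here corresponds to the GUEST_PROVISION exit shown in the stderr above, so these entries reflect one host-side condition rather than per-plugin regressions. Where a daemon outage like this is expected to recur, a guard along the following lines could convert the cascade into explicit skips; this is a hypothetical sketch in Go (the helper name and placement are illustrative, not existing minikube test code):

	// requireSocketVMnet skips a test up front when the socket_vmnet daemon
	// is unreachable, so one broken host daemon does not register as dozens
	// of provisioning failures. Hypothetical helper, not in net_test.go.
	package integration

	import (
		"net"
		"testing"
		"time"
	)

	func requireSocketVMnet(t *testing.T) {
		t.Helper()
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			t.Skipf("socket_vmnet unavailable: %v", err)
		}
		conn.Close()
	}

Each TestNetworkPlugins start test would call requireSocketVMnet(t) before invoking out/minikube-darwin-arm64, leaving the report to show skips carrying the probe error instead of repeated exit status 80 failures.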

TestNetworkPlugins/group/custom-flannel/Start (9.87s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.872709208s)

-- stdout --
	* [custom-flannel-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-033000" primary control-plane node in "custom-flannel-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:18:03.102952   15725 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:18:03.103298   15725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:03.103304   15725 out.go:304] Setting ErrFile to fd 2...
	I0415 05:18:03.103307   15725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:03.103477   15725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:18:03.104929   15725 out.go:298] Setting JSON to false
	I0415 05:18:03.121434   15725 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8254,"bootTime":1713175229,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:18:03.121504   15725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:18:03.126604   15725 out.go:177] * [custom-flannel-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:18:03.133502   15725 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:18:03.138554   15725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:18:03.133587   15725 notify.go:220] Checking for updates...
	I0415 05:18:03.144531   15725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:18:03.148582   15725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:18:03.151551   15725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:18:03.154552   15725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:18:03.157946   15725 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:18:03.158015   15725 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:18:03.158058   15725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:18:03.162498   15725 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:18:03.169557   15725 start.go:297] selected driver: qemu2
	I0415 05:18:03.169567   15725 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:18:03.169573   15725 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:18:03.172002   15725 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:18:03.174530   15725 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:18:03.177631   15725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:18:03.177667   15725 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0415 05:18:03.177682   15725 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0415 05:18:03.177712   15725 start.go:340] cluster config:
	{Name:custom-flannel-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:18:03.182778   15725 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:03.189544   15725 out.go:177] * Starting "custom-flannel-033000" primary control-plane node in "custom-flannel-033000" cluster
	I0415 05:18:03.193601   15725 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:18:03.193632   15725 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:18:03.193640   15725 cache.go:56] Caching tarball of preloaded images
	I0415 05:18:03.193726   15725 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:18:03.193733   15725 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:18:03.193790   15725 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/custom-flannel-033000/config.json ...
	I0415 05:18:03.193802   15725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/custom-flannel-033000/config.json: {Name:mk8942d2388e6a6ad109fe76c5af3a4f53e547b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:18:03.194133   15725 start.go:360] acquireMachinesLock for custom-flannel-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:03.194170   15725 start.go:364] duration metric: took 28.291µs to acquireMachinesLock for "custom-flannel-033000"
	I0415 05:18:03.194181   15725 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:03.194225   15725 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:03.198564   15725 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:18:03.214518   15725 start.go:159] libmachine.API.Create for "custom-flannel-033000" (driver="qemu2")
	I0415 05:18:03.214562   15725 client.go:168] LocalClient.Create starting
	I0415 05:18:03.214641   15725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:03.214672   15725 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:03.214683   15725 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:03.214721   15725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:03.214743   15725 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:03.214751   15725 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:03.215094   15725 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:03.354635   15725 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:03.471507   15725 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:03.471517   15725 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:03.471733   15725 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2
	I0415 05:18:03.485580   15725 main.go:141] libmachine: STDOUT: 
	I0415 05:18:03.485602   15725 main.go:141] libmachine: STDERR: 
	I0415 05:18:03.485672   15725 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2 +20000M
	I0415 05:18:03.498257   15725 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:03.498276   15725 main.go:141] libmachine: STDERR: 
	I0415 05:18:03.498290   15725 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2
	I0415 05:18:03.498295   15725 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:03.498331   15725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:82:4c:29:0c:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2
	I0415 05:18:03.500430   15725 main.go:141] libmachine: STDOUT: 
	I0415 05:18:03.500448   15725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:03.500479   15725 client.go:171] duration metric: took 285.923375ms to LocalClient.Create
	I0415 05:18:05.502586   15725 start.go:128] duration metric: took 2.308433042s to createHost
	I0415 05:18:05.502639   15725 start.go:83] releasing machines lock for "custom-flannel-033000", held for 2.3085575s
	W0415 05:18:05.502683   15725 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:05.513228   15725 out.go:177] * Deleting "custom-flannel-033000" in qemu2 ...
	W0415 05:18:05.534871   15725 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:05.534896   15725 start.go:728] Will try again in 5 seconds ...
	I0415 05:18:10.536849   15725 start.go:360] acquireMachinesLock for custom-flannel-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:10.537133   15725 start.go:364] duration metric: took 144.875µs to acquireMachinesLock for "custom-flannel-033000"
	I0415 05:18:10.537186   15725 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:10.537327   15725 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:10.546059   15725 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:18:10.577822   15725 start.go:159] libmachine.API.Create for "custom-flannel-033000" (driver="qemu2")
	I0415 05:18:10.577864   15725 client.go:168] LocalClient.Create starting
	I0415 05:18:10.577964   15725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:10.578020   15725 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:10.578034   15725 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:10.578083   15725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:10.578118   15725 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:10.578128   15725 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:10.578601   15725 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:10.717655   15725 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:10.880731   15725 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:10.880740   15725 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:10.880935   15725 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2
	I0415 05:18:10.893471   15725 main.go:141] libmachine: STDOUT: 
	I0415 05:18:10.893495   15725 main.go:141] libmachine: STDERR: 
	I0415 05:18:10.893549   15725 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2 +20000M
	I0415 05:18:10.904370   15725 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:10.904386   15725 main.go:141] libmachine: STDERR: 
	I0415 05:18:10.904398   15725 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2
	I0415 05:18:10.904405   15725 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:10.904433   15725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:97:a7:5f:74:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/custom-flannel-033000/disk.qcow2
	I0415 05:18:10.906072   15725 main.go:141] libmachine: STDOUT: 
	I0415 05:18:10.906097   15725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:10.906109   15725 client.go:171] duration metric: took 328.252959ms to LocalClient.Create
	I0415 05:18:12.908208   15725 start.go:128] duration metric: took 2.370940792s to createHost
	I0415 05:18:12.908285   15725 start.go:83] releasing machines lock for "custom-flannel-033000", held for 2.371235875s
	W0415 05:18:12.908593   15725 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:12.920170   15725 out.go:177] 
	W0415 05:18:12.923132   15725 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:18:12.923184   15725 out.go:239] * 
	* 
	W0415 05:18:12.924699   15725 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:18:12.935179   15725 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.87s)
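Every start attempt in this report fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 (which expects the vmnet connection handed to it as a file descriptor, per -netdev socket,id=net0,fd=3) is never launched. "Connection refused" on a unix socket means nothing is listening at that path, which points at the socket_vmnet daemon not running on this agent rather than at any per-test problem. Below is a minimal Go probe for that condition; it is a hypothetical diagnostic written for this report, not part of minikube or the test suite:

	// probe.go: dial the socket that socket_vmnet_client uses and report
	// whether anything is listening. The path is taken from the failing
	// command lines above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// With the daemon down this prints the same "connection refused"
			// that every createHost attempt above runs into.
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}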

TestNetworkPlugins/group/calico/Start (9.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.726628458s)

-- stdout --
	* [calico-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-033000" primary control-plane node in "calico-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:18:15.382564   15846 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:18:15.382681   15846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:15.382683   15846 out.go:304] Setting ErrFile to fd 2...
	I0415 05:18:15.382686   15846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:15.382816   15846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:18:15.383846   15846 out.go:298] Setting JSON to false
	I0415 05:18:15.399946   15846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8266,"bootTime":1713175229,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:18:15.400017   15846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:18:15.405668   15846 out.go:177] * [calico-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:18:15.413753   15846 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:18:15.417670   15846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:18:15.413841   15846 notify.go:220] Checking for updates...
	I0415 05:18:15.421891   15846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:18:15.424712   15846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:18:15.427774   15846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:18:15.430764   15846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:18:15.434125   15846 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:18:15.434194   15846 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:18:15.434240   15846 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:18:15.438710   15846 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:18:15.445656   15846 start.go:297] selected driver: qemu2
	I0415 05:18:15.445663   15846 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:18:15.445669   15846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:18:15.448015   15846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:18:15.451757   15846 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:18:15.455785   15846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:18:15.455819   15846 cni.go:84] Creating CNI manager for "calico"
	I0415 05:18:15.455823   15846 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0415 05:18:15.455856   15846 start.go:340] cluster config:
	{Name:calico-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:18:15.460339   15846 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:15.468766   15846 out.go:177] * Starting "calico-033000" primary control-plane node in "calico-033000" cluster
	I0415 05:18:15.472731   15846 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:18:15.472755   15846 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:18:15.472761   15846 cache.go:56] Caching tarball of preloaded images
	I0415 05:18:15.472820   15846 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:18:15.472826   15846 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:18:15.472881   15846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/calico-033000/config.json ...
	I0415 05:18:15.472892   15846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/calico-033000/config.json: {Name:mkd9be675571bf6b7cdf13a65a96bff46bfeb92b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:18:15.473232   15846 start.go:360] acquireMachinesLock for calico-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:15.473273   15846 start.go:364] duration metric: took 32.834µs to acquireMachinesLock for "calico-033000"
	I0415 05:18:15.473284   15846 start.go:93] Provisioning new machine with config: &{Name:calico-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:15.473311   15846 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:15.480718   15846 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:18:15.495351   15846 start.go:159] libmachine.API.Create for "calico-033000" (driver="qemu2")
	I0415 05:18:15.495386   15846 client.go:168] LocalClient.Create starting
	I0415 05:18:15.495465   15846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:15.495496   15846 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:15.495508   15846 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:15.495542   15846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:15.495564   15846 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:15.495570   15846 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:15.495917   15846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:15.630622   15846 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:15.668698   15846 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:15.668703   15846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:15.668864   15846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2
	I0415 05:18:15.681100   15846 main.go:141] libmachine: STDOUT: 
	I0415 05:18:15.681124   15846 main.go:141] libmachine: STDERR: 
	I0415 05:18:15.681169   15846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2 +20000M
	I0415 05:18:15.691895   15846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:15.691917   15846 main.go:141] libmachine: STDERR: 
	I0415 05:18:15.691939   15846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2
	I0415 05:18:15.691945   15846 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:15.691986   15846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:d9:68:59:93:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2
	I0415 05:18:15.693570   15846 main.go:141] libmachine: STDOUT: 
	I0415 05:18:15.693584   15846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:15.693603   15846 client.go:171] duration metric: took 198.220875ms to LocalClient.Create
	I0415 05:18:17.695724   15846 start.go:128] duration metric: took 2.222479625s to createHost
	I0415 05:18:17.695810   15846 start.go:83] releasing machines lock for "calico-033000", held for 2.222608625s
	W0415 05:18:17.695903   15846 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:17.717438   15846 out.go:177] * Deleting "calico-033000" in qemu2 ...
	W0415 05:18:17.744017   15846 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:17.744064   15846 start.go:728] Will try again in 5 seconds ...
	I0415 05:18:22.746032   15846 start.go:360] acquireMachinesLock for calico-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:22.746503   15846 start.go:364] duration metric: took 387.417µs to acquireMachinesLock for "calico-033000"
	I0415 05:18:22.746628   15846 start.go:93] Provisioning new machine with config: &{Name:calico-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:22.746874   15846 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:22.759498   15846 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:18:22.796868   15846 start.go:159] libmachine.API.Create for "calico-033000" (driver="qemu2")
	I0415 05:18:22.796909   15846 client.go:168] LocalClient.Create starting
	I0415 05:18:22.797021   15846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:22.797080   15846 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:22.797094   15846 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:22.797161   15846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:22.797201   15846 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:22.797209   15846 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:22.797670   15846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:22.939422   15846 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:23.007365   15846 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:23.007371   15846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:23.007557   15846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2
	I0415 05:18:23.020307   15846 main.go:141] libmachine: STDOUT: 
	I0415 05:18:23.020325   15846 main.go:141] libmachine: STDERR: 
	I0415 05:18:23.020376   15846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2 +20000M
	I0415 05:18:23.031473   15846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:23.031489   15846 main.go:141] libmachine: STDERR: 
	I0415 05:18:23.031508   15846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2
	I0415 05:18:23.031513   15846 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:23.031543   15846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:63:6f:d7:68:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/calico-033000/disk.qcow2
	I0415 05:18:23.033258   15846 main.go:141] libmachine: STDOUT: 
	I0415 05:18:23.033273   15846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:23.033287   15846 client.go:171] duration metric: took 236.382625ms to LocalClient.Create
	I0415 05:18:25.035509   15846 start.go:128] duration metric: took 2.288689333s to createHost
	I0415 05:18:25.035596   15846 start.go:83] releasing machines lock for "calico-033000", held for 2.289146417s
	W0415 05:18:25.036045   15846 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:25.047705   15846 out.go:177] 
	W0415 05:18:25.053794   15846 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:18:25.053859   15846 out.go:239] * 
	* 
	W0415 05:18:25.056590   15846 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:18:25.065688   15846 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.73s)
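The logs above also show the driver's recovery path: after the first createHost attempt fails, it deletes the half-created profile, waits five seconds (start.go:728: "Will try again in 5 seconds ..."), runs the whole sequence once more, and only then exits with GUEST_PROVISION / exit status 80. A self-contained sketch of that retry-once shape, with createHost and deleteHost as hypothetical stand-ins for the real driver calls (illustrative only, not minikube's actual source):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// Stand-in for the step logged as `createHost starting`; here it always
	// fails the way every attempt in this report does.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// Stand-in for the cleanup logged as `* Deleting "<profile>" in qemu2 ...`.
	func deleteHost(name string) {}

	func startWithRetry(name string) error {
		err := createHost(name)
		if err == nil {
			return nil
		}
		log.Printf("! StartHost failed, but will try again: %v", err)
		deleteHost(name)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(name); err != nil {
			return fmt.Errorf("GUEST_PROVISION: %w", err) // surfaces as exit status 80
		}
		return nil
	}

	func main() {
		if err := startWithRetry("calico-033000"); err != nil {
			fmt.Println(err)
		}
	}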

TestNetworkPlugins/group/false/Start (9.74s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-033000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.741522792s)

-- stdout --
	* [false-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-033000" primary control-plane node in "false-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:18:27.575470   15964 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:18:27.575603   15964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:27.575606   15964 out.go:304] Setting ErrFile to fd 2...
	I0415 05:18:27.575608   15964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:27.575716   15964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:18:27.576753   15964 out.go:298] Setting JSON to false
	I0415 05:18:27.592782   15964 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8278,"bootTime":1713175229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:18:27.592872   15964 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:18:27.598631   15964 out.go:177] * [false-033000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:18:27.606572   15964 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:18:27.610653   15964 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:18:27.606605   15964 notify.go:220] Checking for updates...
	I0415 05:18:27.616702   15964 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:18:27.619634   15964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:18:27.622625   15964 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:18:27.625542   15964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:18:27.628916   15964 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:18:27.628980   15964 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:18:27.629021   15964 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:18:27.633652   15964 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:18:27.640652   15964 start.go:297] selected driver: qemu2
	I0415 05:18:27.640659   15964 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:18:27.640664   15964 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:18:27.642721   15964 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:18:27.645636   15964 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:18:27.648602   15964 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:18:27.648642   15964 cni.go:84] Creating CNI manager for "false"
	I0415 05:18:27.648666   15964 start.go:340] cluster config:
	{Name:false-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:18:27.652818   15964 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:27.659600   15964 out.go:177] * Starting "false-033000" primary control-plane node in "false-033000" cluster
	I0415 05:18:27.663549   15964 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:18:27.663562   15964 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:18:27.663568   15964 cache.go:56] Caching tarball of preloaded images
	I0415 05:18:27.663622   15964 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:18:27.663627   15964 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:18:27.663676   15964 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/false-033000/config.json ...
	I0415 05:18:27.663686   15964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/false-033000/config.json: {Name:mke8fd96c50049859f0a404c2683165f95663a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:18:27.663905   15964 start.go:360] acquireMachinesLock for false-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:27.663936   15964 start.go:364] duration metric: took 25.583µs to acquireMachinesLock for "false-033000"
	I0415 05:18:27.663946   15964 start.go:93] Provisioning new machine with config: &{Name:false-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:27.663970   15964 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:27.671587   15964 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:18:27.686454   15964 start.go:159] libmachine.API.Create for "false-033000" (driver="qemu2")
	I0415 05:18:27.686484   15964 client.go:168] LocalClient.Create starting
	I0415 05:18:27.686549   15964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:27.686579   15964 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:27.686590   15964 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:27.686629   15964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:27.686651   15964 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:27.686658   15964 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:27.686975   15964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:27.818967   15964 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:27.858874   15964 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:27.858879   15964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:27.859035   15964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2
	I0415 05:18:27.871868   15964 main.go:141] libmachine: STDOUT: 
	I0415 05:18:27.871891   15964 main.go:141] libmachine: STDERR: 
	I0415 05:18:27.871950   15964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2 +20000M
	I0415 05:18:27.883219   15964 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:27.883237   15964 main.go:141] libmachine: STDERR: 
	I0415 05:18:27.883257   15964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2
	I0415 05:18:27.883269   15964 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:27.883303   15964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:a5:73:da:2e:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2
	I0415 05:18:27.885108   15964 main.go:141] libmachine: STDOUT: 
	I0415 05:18:27.885125   15964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:27.885144   15964 client.go:171] duration metric: took 198.662666ms to LocalClient.Create
	I0415 05:18:29.887345   15964 start.go:128] duration metric: took 2.223436625s to createHost
	I0415 05:18:29.887431   15964 start.go:83] releasing machines lock for "false-033000", held for 2.223578583s
	W0415 05:18:29.887490   15964 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:29.904546   15964 out.go:177] * Deleting "false-033000" in qemu2 ...
	W0415 05:18:29.930596   15964 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:29.930641   15964 start.go:728] Will try again in 5 seconds ...
	I0415 05:18:34.932510   15964 start.go:360] acquireMachinesLock for false-033000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:34.932613   15964 start.go:364] duration metric: took 81.958µs to acquireMachinesLock for "false-033000"
	I0415 05:18:34.932641   15964 start.go:93] Provisioning new machine with config: &{Name:false-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:34.932685   15964 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:34.937893   15964 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0415 05:18:34.953717   15964 start.go:159] libmachine.API.Create for "false-033000" (driver="qemu2")
	I0415 05:18:34.953741   15964 client.go:168] LocalClient.Create starting
	I0415 05:18:34.953826   15964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:34.953864   15964 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:34.953873   15964 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:34.953919   15964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:34.953948   15964 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:34.953962   15964 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:34.954360   15964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:35.091594   15964 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:35.219404   15964 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:35.219410   15964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:35.219587   15964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2
	I0415 05:18:35.232178   15964 main.go:141] libmachine: STDOUT: 
	I0415 05:18:35.232205   15964 main.go:141] libmachine: STDERR: 
	I0415 05:18:35.232257   15964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2 +20000M
	I0415 05:18:35.243266   15964 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:35.243291   15964 main.go:141] libmachine: STDERR: 
	I0415 05:18:35.243304   15964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2
	I0415 05:18:35.243310   15964 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:35.243350   15964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:a0:d8:d4:7b:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/false-033000/disk.qcow2
	I0415 05:18:35.245031   15964 main.go:141] libmachine: STDOUT: 
	I0415 05:18:35.245051   15964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:35.245065   15964 client.go:171] duration metric: took 291.332791ms to LocalClient.Create
	I0415 05:18:37.247068   15964 start.go:128] duration metric: took 2.31446675s to createHost
	I0415 05:18:37.247092   15964 start.go:83] releasing machines lock for "false-033000", held for 2.314569917s
	W0415 05:18:37.247209   15964 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:37.258487   15964 out.go:177] 
	W0415 05:18:37.264520   15964 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:18:37.264527   15964 out.go:239] * 
	* 
	W0415 05:18:37.265131   15964 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:18:37.278481   15964 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.74s)
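Every qemu2 failure in this report reduces to the same root cause, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched. A minimal standalone sketch (a hypothetical helper, not minikube code; only the socket path is taken from the log) that checks whether the daemon is actually listening before a start is attempted:

// probe.go: dial the socket_vmnet control socket the way a client would,
// to distinguish "daemon not running" (connection refused, as in this
// report) from a healthy listener.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing command above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the failure mode in the log: the path is configured,
		// but nothing is accepting connections on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}

On this agent the dial would fail with "connection refused", exactly as logged; restarting the daemon (for a Homebrew install, typically "sudo brew services restart socket_vmnet") is the usual remedy.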
TestStartStop/group/old-k8s-version/serial/FirstStart (9.9s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.833977291s)
-- stdout --
	* [old-k8s-version-533000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-533000" primary control-plane node in "old-k8s-version-533000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-533000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0415 05:18:39.513738   16077 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:18:39.513882   16077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:39.513886   16077 out.go:304] Setting ErrFile to fd 2...
	I0415 05:18:39.513892   16077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:39.514017   16077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:18:39.515123   16077 out.go:298] Setting JSON to false
	I0415 05:18:39.531232   16077 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8290,"bootTime":1713175229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:18:39.531294   16077 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:18:39.541534   16077 out.go:177] * [old-k8s-version-533000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:18:39.549533   16077 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:18:39.552467   16077 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:18:39.549570   16077 notify.go:220] Checking for updates...
	I0415 05:18:39.559370   16077 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:18:39.562463   16077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:18:39.565340   16077 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:18:39.568430   16077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:18:39.571751   16077 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:18:39.571823   16077 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:18:39.571866   16077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:18:39.575321   16077 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:18:39.582415   16077 start.go:297] selected driver: qemu2
	I0415 05:18:39.582423   16077 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:18:39.582429   16077 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:18:39.584697   16077 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:18:39.586333   16077 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:18:39.589475   16077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:18:39.589517   16077 cni.go:84] Creating CNI manager for ""
	I0415 05:18:39.589523   16077 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 05:18:39.589565   16077 start.go:340] cluster config:
	{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:18:39.593855   16077 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:39.601413   16077 out.go:177] * Starting "old-k8s-version-533000" primary control-plane node in "old-k8s-version-533000" cluster
	I0415 05:18:39.605413   16077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 05:18:39.605432   16077 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 05:18:39.605437   16077 cache.go:56] Caching tarball of preloaded images
	I0415 05:18:39.605498   16077 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:18:39.605503   16077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 05:18:39.605552   16077 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/old-k8s-version-533000/config.json ...
	I0415 05:18:39.605564   16077 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/old-k8s-version-533000/config.json: {Name:mk13a4c15f1e1e7e4499db0c679ec920d3348591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:18:39.605776   16077 start.go:360] acquireMachinesLock for old-k8s-version-533000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:39.605813   16077 start.go:364] duration metric: took 27.666µs to acquireMachinesLock for "old-k8s-version-533000"
	I0415 05:18:39.605824   16077 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:39.605849   16077 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:39.614449   16077 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:18:39.630845   16077 start.go:159] libmachine.API.Create for "old-k8s-version-533000" (driver="qemu2")
	I0415 05:18:39.630871   16077 client.go:168] LocalClient.Create starting
	I0415 05:18:39.630936   16077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:39.630967   16077 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:39.630977   16077 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:39.631009   16077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:39.631031   16077 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:39.631038   16077 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:39.631362   16077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:39.763482   16077 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:39.895932   16077 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:39.895940   16077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:39.896122   16077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0415 05:18:39.908769   16077 main.go:141] libmachine: STDOUT: 
	I0415 05:18:39.908796   16077 main.go:141] libmachine: STDERR: 
	I0415 05:18:39.908862   16077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2 +20000M
	I0415 05:18:39.920214   16077 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:39.920232   16077 main.go:141] libmachine: STDERR: 
	I0415 05:18:39.920257   16077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0415 05:18:39.920262   16077 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:39.920298   16077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:bf:49:7d:fd:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0415 05:18:39.922028   16077 main.go:141] libmachine: STDOUT: 
	I0415 05:18:39.922046   16077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:39.922064   16077 client.go:171] duration metric: took 291.200125ms to LocalClient.Create
	I0415 05:18:41.924228   16077 start.go:128] duration metric: took 2.318451791s to createHost
	I0415 05:18:41.924290   16077 start.go:83] releasing machines lock for "old-k8s-version-533000", held for 2.318564417s
	W0415 05:18:41.924344   16077 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:41.938482   16077 out.go:177] * Deleting "old-k8s-version-533000" in qemu2 ...
	W0415 05:18:41.965438   16077 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:41.965485   16077 start.go:728] Will try again in 5 seconds ...
	I0415 05:18:46.967524   16077 start.go:360] acquireMachinesLock for old-k8s-version-533000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:46.968033   16077 start.go:364] duration metric: took 390µs to acquireMachinesLock for "old-k8s-version-533000"
	I0415 05:18:46.968153   16077 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:46.968380   16077 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:46.987070   16077 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:18:47.036316   16077 start.go:159] libmachine.API.Create for "old-k8s-version-533000" (driver="qemu2")
	I0415 05:18:47.036365   16077 client.go:168] LocalClient.Create starting
	I0415 05:18:47.036484   16077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:47.036555   16077 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:47.036574   16077 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:47.036662   16077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:47.036706   16077 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:47.036718   16077 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:47.037295   16077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:47.180836   16077 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:47.242773   16077 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:47.242784   16077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:47.242961   16077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0415 05:18:47.255728   16077 main.go:141] libmachine: STDOUT: 
	I0415 05:18:47.255748   16077 main.go:141] libmachine: STDERR: 
	I0415 05:18:47.255804   16077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2 +20000M
	I0415 05:18:47.267626   16077 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:47.267645   16077 main.go:141] libmachine: STDERR: 
	I0415 05:18:47.267658   16077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0415 05:18:47.267663   16077 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:47.267706   16077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:9b:b5:15:9d:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0415 05:18:47.269625   16077 main.go:141] libmachine: STDOUT: 
	I0415 05:18:47.269640   16077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:47.269656   16077 client.go:171] duration metric: took 233.295959ms to LocalClient.Create
	I0415 05:18:49.271941   16077 start.go:128] duration metric: took 2.303573125s to createHost
	I0415 05:18:49.272069   16077 start.go:83] releasing machines lock for "old-k8s-version-533000", held for 2.304107625s
	W0415 05:18:49.272446   16077 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:49.287979   16077 out.go:177] 
	W0415 05:18:49.292140   16077 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:18:49.292178   16077 out.go:239] * 
	* 
	W0415 05:18:49.294803   16077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:18:49.304085   16077 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (67.418334ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.90s)
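The disk-image phase above succeeds before the network step fails: it is just the two qemu-img invocations shown in the "executing:" lines, a raw-to-qcow2 convert followed by a +20000M resize. A sketch of the same sequence via os/exec (the file names here are illustrative placeholders, not the Jenkins paths from the log):

// create_disk.go: reproduce the two qemu-img calls from the log,
// "convert -f raw -O qcow2 ..." followed by "resize ... +20000M".
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	fmt.Println(name, args, "ok")
}

func main() {
	raw := "disk.qcow2.raw" // raw image written earlier in the create flow
	qcow2 := "disk.qcow2"   // qcow2 image QEMU boots from

	run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
	run("qemu-img", "resize", qcow2, "+20000M")
}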
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-533000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-533000 create -f testdata/busybox.yaml: exit status 1 (30.146834ms)
** stderr ** 
	error: context "old-k8s-version-533000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-533000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (29.953084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (30.644166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
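This failure is purely downstream of FirstStart: the cluster never came up, so kubectl has no "old-k8s-version-533000" context to run against. A sketch of a guard (hypothetical, not part of the test suite) that checks the kubeconfig for the context before shelling out:

// check_context.go: list kubeconfig contexts and confirm the target
// exists, avoiding the `context "..." does not exist` error above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "old-k8s-version-533000" // profile name from this report

	// `kubectl config get-contexts -o name` prints one context per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "listing contexts failed: %v\n", err)
		os.Exit(1)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == ctx {
			fmt.Printf("context %q exists; kubectl --context %s is safe\n", ctx, ctx)
			return
		}
	}
	fmt.Fprintf(os.Stderr, "context %q does not exist (the start above never created it)\n", ctx)
	os.Exit(1)
}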
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-533000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-533000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-533000 describe deploy/metrics-server -n kube-system: exit status 1 (26.915458ms)
** stderr ** 
	error: context "old-k8s-version-533000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-533000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (30.662583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
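The assertion behind this test reads the metrics-server Deployment's container image and expects the fake.domain registry override passed to addons enable. A sketch of that check (hypothetical; the names mirror the invocation above), which on this run would already fail at the kubectl call because the context is missing:

// addon_image.go: fetch the deployment's image via jsonpath and check
// it carries the overridden registry prefix.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const want = "fake.domain/registry.k8s.io/echoserver:1.4"

	out, err := exec.Command("kubectl",
		"--context", "old-k8s-version-533000", "-n", "kube-system",
		"get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}",
	).Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubectl failed (no such context on this run): %v\n", err)
		os.Exit(1)
	}
	if !strings.Contains(string(out), want) {
		fmt.Fprintf(os.Stderr, "unexpected image(s): %s\n", out)
		os.Exit(1)
	}
	fmt.Println("addon loaded the expected image:", want)
}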
TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.18487375s)
-- stdout --
	* [old-k8s-version-533000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-533000" primary control-plane node in "old-k8s-version-533000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-533000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-533000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0415 05:18:52.563096   16125 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:18:52.563235   16125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:52.563238   16125 out.go:304] Setting ErrFile to fd 2...
	I0415 05:18:52.563241   16125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:52.563386   16125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:18:52.564438   16125 out.go:298] Setting JSON to false
	I0415 05:18:52.580322   16125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8303,"bootTime":1713175229,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:18:52.580380   16125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:18:52.584983   16125 out.go:177] * [old-k8s-version-533000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:18:52.592041   16125 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:18:52.596069   16125 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:18:52.592161   16125 notify.go:220] Checking for updates...
	I0415 05:18:52.602099   16125 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:18:52.605046   16125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:18:52.608059   16125 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:18:52.611101   16125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:18:52.614231   16125 config.go:182] Loaded profile config "old-k8s-version-533000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 05:18:52.618053   16125 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0415 05:18:52.621035   16125 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:18:52.623928   16125 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:18:52.631059   16125 start.go:297] selected driver: qemu2
	I0415 05:18:52.631065   16125 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:18:52.631126   16125 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:18:52.633421   16125 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:18:52.633472   16125 cni.go:84] Creating CNI manager for ""
	I0415 05:18:52.633478   16125 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 05:18:52.633512   16125 start.go:340] cluster config:
	{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:18:52.637671   16125 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:52.645067   16125 out.go:177] * Starting "old-k8s-version-533000" primary control-plane node in "old-k8s-version-533000" cluster
	I0415 05:18:52.648871   16125 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 05:18:52.648888   16125 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 05:18:52.648892   16125 cache.go:56] Caching tarball of preloaded images
	I0415 05:18:52.648955   16125 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:18:52.648959   16125 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 05:18:52.649005   16125 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/old-k8s-version-533000/config.json ...
	I0415 05:18:52.649436   16125 start.go:360] acquireMachinesLock for old-k8s-version-533000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:52.649468   16125 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "old-k8s-version-533000"
	I0415 05:18:52.649476   16125 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:18:52.649482   16125 fix.go:54] fixHost starting: 
	I0415 05:18:52.649589   16125 fix.go:112] recreateIfNeeded on old-k8s-version-533000: state=Stopped err=<nil>
	W0415 05:18:52.649597   16125 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:18:52.654118   16125 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-533000" ...
	I0415 05:18:52.662028   16125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:9b:b5:15:9d:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0415 05:18:52.663951   16125 main.go:141] libmachine: STDOUT: 
	I0415 05:18:52.663968   16125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:52.663994   16125 fix.go:56] duration metric: took 14.513583ms for fixHost
	I0415 05:18:52.663996   16125 start.go:83] releasing machines lock for "old-k8s-version-533000", held for 14.524583ms
	W0415 05:18:52.664002   16125 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:18:52.664032   16125 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:52.664037   16125 start.go:728] Will try again in 5 seconds ...
	I0415 05:18:57.666016   16125 start.go:360] acquireMachinesLock for old-k8s-version-533000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:57.666259   16125 start.go:364] duration metric: took 184.958µs to acquireMachinesLock for "old-k8s-version-533000"
	I0415 05:18:57.666362   16125 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:18:57.666374   16125 fix.go:54] fixHost starting: 
	I0415 05:18:57.666827   16125 fix.go:112] recreateIfNeeded on old-k8s-version-533000: state=Stopped err=<nil>
	W0415 05:18:57.666845   16125 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:18:57.671348   16125 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-533000" ...
	I0415 05:18:57.680211   16125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:9b:b5:15:9d:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0415 05:18:57.687601   16125 main.go:141] libmachine: STDOUT: 
	I0415 05:18:57.687659   16125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:57.687742   16125 fix.go:56] duration metric: took 21.367917ms for fixHost
	I0415 05:18:57.687753   16125 start.go:83] releasing machines lock for "old-k8s-version-533000", held for 21.478875ms
	W0415 05:18:57.687878   16125 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-533000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-533000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:18:57.695207   16125 out.go:177] 
	W0415 05:18:57.699166   16125 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:18:57.699190   16125 out.go:239] * 
	* 
	W0415 05:18:57.700927   16125 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:18:57.710077   16125 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (55.666042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
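The SecondStart log shows minikube's own recovery path: fixHost fails, it waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION. A generic sketch of that fixed-delay retry shape (startHost here is a stand-in that always fails, like the driver start in this report; it is not minikube's code):

// retry.go: fixed-delay retry, mirroring "StartHost failed, but will
// try again" followed by "Will try again in 5 seconds ...".
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost simulates the failing driver start captured above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
}

func main() {
	const attempts = 2
	const delay = 5 * time.Second

	var err error
	for i := 1; i <= attempts; i++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		fmt.Printf("attempt %d failed: %v\n", i, err)
		if i < attempts {
			fmt.Printf("will try again in %s ...\n", delay)
			time.Sleep(delay)
		}
	}
	fmt.Printf("giving up after %d attempts: %v\n", attempts, err)
}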
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-533000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (31.910125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-533000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-533000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-533000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.710458ms)
** stderr ** 
	error: context "old-k8s-version-533000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-533000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (30.139417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-533000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (30.164167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
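
The entire expected v1.20.0 image set shows up on the -want side of the diff because "image list" has nothing to enumerate for a profile whose host never started. On a healthy profile the assertion can be reproduced by hand with the same command the test runs:

	out/minikube-darwin-arm64 -p old-k8s-version-533000 image list --format=json

and the JSON output should contain each of the k8s.gcr.io images listed in the diff.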

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-533000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-533000 --alsologtostderr -v=1: exit status 83 (40.666958ms)

-- stdout --
	* The control-plane node old-k8s-version-533000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-533000"

-- /stdout --
** stderr ** 
	I0415 05:18:57.965456   16144 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:18:57.965928   16144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:57.965931   16144 out.go:304] Setting ErrFile to fd 2...
	I0415 05:18:57.965936   16144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:57.966073   16144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:18:57.966278   16144 out.go:298] Setting JSON to false
	I0415 05:18:57.966286   16144 mustload.go:65] Loading cluster: old-k8s-version-533000
	I0415 05:18:57.966470   16144 config.go:182] Loaded profile config "old-k8s-version-533000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 05:18:57.969773   16144 out.go:177] * The control-plane node old-k8s-version-533000 host is not running: state=Stopped
	I0415 05:18:57.972611   16144 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-533000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-533000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (29.797875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (30.533375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
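
Exit status 83 here is minikube refusing to pause a profile whose host is not running; the stdout block above already prints the remedy. Sketched with the commands from this log, the sequence the test expects is:

	out/minikube-darwin-arm64 start -p old-k8s-version-533000
	out/minikube-darwin-arm64 pause -p old-k8s-version-533000 --alsologtostderr -v=1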

TestStartStop/group/no-preload/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-680000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-680000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.2: exit status 80 (9.918201167s)

-- stdout --
	* [no-preload-680000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-680000" primary control-plane node in "no-preload-680000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-680000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:18:58.433975   16167 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:18:58.434112   16167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:58.434115   16167 out.go:304] Setting ErrFile to fd 2...
	I0415 05:18:58.434117   16167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:18:58.434248   16167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:18:58.435407   16167 out.go:298] Setting JSON to false
	I0415 05:18:58.452660   16167 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8309,"bootTime":1713175229,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:18:58.452741   16167 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:18:58.454967   16167 out.go:177] * [no-preload-680000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:18:58.464299   16167 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:18:58.461519   16167 notify.go:220] Checking for updates...
	I0415 05:18:58.471439   16167 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:18:58.479422   16167 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:18:58.486409   16167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:18:58.492396   16167 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:18:58.499372   16167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:18:58.503793   16167 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:18:58.503862   16167 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:18:58.503910   16167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:18:58.508343   16167 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:18:58.518300   16167 start.go:297] selected driver: qemu2
	I0415 05:18:58.518310   16167 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:18:58.518318   16167 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:18:58.520782   16167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:18:58.524360   16167 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:18:58.527538   16167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:18:58.527572   16167 cni.go:84] Creating CNI manager for ""
	I0415 05:18:58.527580   16167 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:18:58.527584   16167 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:18:58.527638   16167 start.go:340] cluster config:
	{Name:no-preload-680000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-680000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:18:58.532489   16167 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.540419   16167 out.go:177] * Starting "no-preload-680000" primary control-plane node in "no-preload-680000" cluster
	I0415 05:18:58.544415   16167 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 05:18:58.544506   16167 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/no-preload-680000/config.json ...
	I0415 05:18:58.544534   16167 cache.go:107] acquiring lock: {Name:mk5c74b0ce6049f8d3bd8bc61122954af475ea46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.544548   16167 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/no-preload-680000/config.json: {Name:mk93aa71f71b44553e74b24309db74852a2f1798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:18:58.544572   16167 cache.go:107] acquiring lock: {Name:mk3d551c74a49dcca463b3aabe366e6ef9b203f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.544582   16167 cache.go:107] acquiring lock: {Name:mk7a6d09eeee038ad2ad851a029bc6ea4f76123d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.544603   16167 cache.go:107] acquiring lock: {Name:mkbe8d330125fb028719cd09741fe5169976eccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.544657   16167 cache.go:107] acquiring lock: {Name:mk15507dc600dad047b992cbdbaeac23b267e991 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.544687   16167 cache.go:107] acquiring lock: {Name:mk28e385c9472081a3231875efb2e405114dd24b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.544687   16167 cache.go:107] acquiring lock: {Name:mk6ac0a4b1093f31760c57b495066eb00c251df1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.544807   16167 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0415 05:18:58.544815   16167 cache.go:107] acquiring lock: {Name:mk3ebff883872c38be2f656dabe2b29ce6ad97fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:18:58.544824   16167 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 291.333µs
	I0415 05:18:58.544841   16167 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0415 05:18:58.544855   16167 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0415 05:18:58.544857   16167 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0415 05:18:58.545001   16167 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0415 05:18:58.545004   16167 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0415 05:18:58.544900   16167 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0415 05:18:58.544916   16167 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0415 05:18:58.544968   16167 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0415 05:18:58.544969   16167 start.go:360] acquireMachinesLock for no-preload-680000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:18:58.545196   16167 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "no-preload-680000"
	I0415 05:18:58.545206   16167 start.go:93] Provisioning new machine with config: &{Name:no-preload-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-680000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:18:58.545238   16167 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:18:58.553321   16167 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:18:58.556208   16167 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0415 05:18:58.557154   16167 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0415 05:18:58.557132   16167 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0415 05:18:58.557276   16167 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0415 05:18:58.557168   16167 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0415 05:18:58.559563   16167 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0415 05:18:58.559778   16167 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0415 05:18:58.570249   16167 start.go:159] libmachine.API.Create for "no-preload-680000" (driver="qemu2")
	I0415 05:18:58.570268   16167 client.go:168] LocalClient.Create starting
	I0415 05:18:58.570363   16167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:18:58.570396   16167 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:58.570409   16167 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:58.570477   16167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:18:58.570504   16167 main.go:141] libmachine: Decoding PEM data...
	I0415 05:18:58.570512   16167 main.go:141] libmachine: Parsing certificate...
	I0415 05:18:58.570863   16167 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:18:58.718415   16167 main.go:141] libmachine: Creating SSH key...
	I0415 05:18:58.882218   16167 main.go:141] libmachine: Creating Disk image...
	I0415 05:18:58.882234   16167 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:18:58.882411   16167 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2
	I0415 05:18:58.894854   16167 main.go:141] libmachine: STDOUT: 
	I0415 05:18:58.894873   16167 main.go:141] libmachine: STDERR: 
	I0415 05:18:58.894917   16167 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2 +20000M
	I0415 05:18:58.906357   16167 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:18:58.906371   16167 main.go:141] libmachine: STDERR: 
	I0415 05:18:58.906384   16167 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2
	I0415 05:18:58.906387   16167 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:18:58.906420   16167 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:c1:68:b7:a6:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2
	I0415 05:18:58.908301   16167 main.go:141] libmachine: STDOUT: 
	I0415 05:18:58.908315   16167 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:18:58.908335   16167 client.go:171] duration metric: took 338.07525ms to LocalClient.Create
	I0415 05:18:58.985158   16167 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0415 05:18:58.985394   16167 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0415 05:18:58.987731   16167 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0415 05:18:58.990867   16167 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0415 05:18:59.029433   16167 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0415 05:18:59.046345   16167 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0415 05:18:59.052931   16167 cache.go:162] opening:  /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0415 05:18:59.153900   16167 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0415 05:18:59.153916   16167 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 609.364042ms
	I0415 05:18:59.153925   16167 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0415 05:19:00.908417   16167 start.go:128] duration metric: took 2.363266625s to createHost
	I0415 05:19:00.908442   16167 start.go:83] releasing machines lock for "no-preload-680000", held for 2.363339958s
	W0415 05:19:00.908459   16167 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:00.916503   16167 out.go:177] * Deleting "no-preload-680000" in qemu2 ...
	W0415 05:19:00.928997   16167 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:00.929008   16167 start.go:728] Will try again in 5 seconds ...
	I0415 05:19:01.933540   16167 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 exists
	I0415 05:19:01.933591   16167 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2" took 3.389061583s
	I0415 05:19:01.933611   16167 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-rc.2 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 succeeded
	I0415 05:19:02.554282   16167 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 exists
	I0415 05:19:02.554326   16167 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-rc.2" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.2" took 4.009817666s
	I0415 05:19:02.554348   16167 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-rc.2 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 succeeded
	I0415 05:19:02.719344   16167 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0415 05:19:02.719366   16167 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.174851333s
	I0415 05:19:02.719379   16167 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0415 05:19:02.795726   16167 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 exists
	I0415 05:19:02.795747   16167 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2" took 4.251370875s
	I0415 05:19:02.795757   16167 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 succeeded
	I0415 05:19:04.071338   16167 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 exists
	I0415 05:19:04.071372   16167 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2" took 5.526803208s
	I0415 05:19:04.071391   16167 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-rc.2 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 succeeded
	I0415 05:19:05.929054   16167 start.go:360] acquireMachinesLock for no-preload-680000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:05.929547   16167 start.go:364] duration metric: took 408.25µs to acquireMachinesLock for "no-preload-680000"
	I0415 05:19:05.929647   16167 start.go:93] Provisioning new machine with config: &{Name:no-preload-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-680000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:19:05.929857   16167 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:19:05.939462   16167 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:19:05.987611   16167 start.go:159] libmachine.API.Create for "no-preload-680000" (driver="qemu2")
	I0415 05:19:05.987652   16167 client.go:168] LocalClient.Create starting
	I0415 05:19:05.987789   16167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:19:05.987863   16167 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:05.987886   16167 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:05.987980   16167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:19:05.988025   16167 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:05.988044   16167 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:05.988602   16167 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:19:06.134008   16167 main.go:141] libmachine: Creating SSH key...
	I0415 05:19:06.244784   16167 main.go:141] libmachine: Creating Disk image...
	I0415 05:19:06.244793   16167 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:19:06.245009   16167 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2
	I0415 05:19:06.258763   16167 main.go:141] libmachine: STDOUT: 
	I0415 05:19:06.258789   16167 main.go:141] libmachine: STDERR: 
	I0415 05:19:06.258866   16167 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2 +20000M
	I0415 05:19:06.271611   16167 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:19:06.271642   16167 main.go:141] libmachine: STDERR: 
	I0415 05:19:06.271657   16167 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2
	I0415 05:19:06.271670   16167 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:19:06.271719   16167 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:25:ec:0e:70:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2
	I0415 05:19:06.273870   16167 main.go:141] libmachine: STDOUT: 
	I0415 05:19:06.273901   16167 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:06.273917   16167 client.go:171] duration metric: took 286.272292ms to LocalClient.Create
	I0415 05:19:06.398774   16167 cache.go:157] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0415 05:19:06.398793   16167 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.854548s
	I0415 05:19:06.398808   16167 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0415 05:19:06.398824   16167 cache.go:87] Successfully saved all images to host disk.
	I0415 05:19:08.276077   16167 start.go:128] duration metric: took 2.346260041s to createHost
	I0415 05:19:08.276176   16167 start.go:83] releasing machines lock for "no-preload-680000", held for 2.346704792s
	W0415 05:19:08.276505   16167 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-680000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-680000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:08.294218   16167 out.go:177] 
	W0415 05:19:08.298269   16167 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:08.298316   16167 out.go:239] * 
	* 
	W0415 05:19:08.301195   16167 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:19:08.309151   16167 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-680000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (63.447833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.98s)
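
Both create attempts die on the same "Connection refused" against /var/run/socket_vmnet, which points at the socket_vmnet daemon on the build host rather than at this profile. A minimal check-and-restart sketch, assuming the /opt/socket_vmnet install layout shown in the log (daemon flags per the socket_vmnet README; adjust to the local setup):

	# is the socket present at the path minikube expects?
	ls -l /var/run/socket_vmnet
	# restart the daemon so the qemu netdev file descriptor can be handed out again
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

With the daemon answering, "minikube delete -p no-preload-680000" followed by the start command above is the retry path the error message itself suggests.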

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-680000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-680000 create -f testdata/busybox.yaml: exit status 1 (29.422542ms)

** stderr ** 
	error: context "no-preload-680000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-680000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (29.849916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (31.399417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-680000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-680000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-680000 describe deploy/metrics-server -n kube-system: exit status 1 (26.610375ms)

** stderr ** 
	error: context "no-preload-680000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-680000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (30.090041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-680000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-680000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.2: exit status 80 (5.190144458s)

-- stdout --
	* [no-preload-680000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-680000" primary control-plane node in "no-preload-680000" cluster
	* Restarting existing qemu2 VM for "no-preload-680000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-680000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:19:12.012216   16244 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:12.012339   16244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:12.012343   16244 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:12.012345   16244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:12.012490   16244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:12.013545   16244 out.go:298] Setting JSON to false
	I0415 05:19:12.029529   16244 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8323,"bootTime":1713175229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:19:12.029587   16244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:19:12.033161   16244 out.go:177] * [no-preload-680000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:19:12.044102   16244 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:19:12.039119   16244 notify.go:220] Checking for updates...
	I0415 05:19:12.051014   16244 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:19:12.054106   16244 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:19:12.056964   16244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:19:12.060062   16244 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:19:12.063053   16244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:19:12.064628   16244 config.go:182] Loaded profile config "no-preload-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.2
	I0415 05:19:12.064889   16244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:19:12.069069   16244 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:19:12.075931   16244 start.go:297] selected driver: qemu2
	I0415 05:19:12.075937   16244 start.go:901] validating driver "qemu2" against &{Name:no-preload-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-680000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:12.075989   16244 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:19:12.078221   16244 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:19:12.078260   16244 cni.go:84] Creating CNI manager for ""
	I0415 05:19:12.078267   16244 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:19:12.078295   16244 start.go:340] cluster config:
	{Name:no-preload-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-680000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:12.082544   16244 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.090110   16244 out.go:177] * Starting "no-preload-680000" primary control-plane node in "no-preload-680000" cluster
	I0415 05:19:12.094035   16244 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 05:19:12.094088   16244 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/no-preload-680000/config.json ...
	I0415 05:19:12.094101   16244 cache.go:107] acquiring lock: {Name:mk5c74b0ce6049f8d3bd8bc61122954af475ea46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.094102   16244 cache.go:107] acquiring lock: {Name:mk3ebff883872c38be2f656dabe2b29ce6ad97fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.094126   16244 cache.go:107] acquiring lock: {Name:mk15507dc600dad047b992cbdbaeac23b267e991 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.094155   16244 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0415 05:19:12.094157   16244 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 exists
	I0415 05:19:12.094159   16244 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 60.041µs
	I0415 05:19:12.094161   16244 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2" took 64.083µs
	I0415 05:19:12.094165   16244 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0415 05:19:12.094165   16244 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-rc.2 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 succeeded
	I0415 05:19:12.094170   16244 cache.go:107] acquiring lock: {Name:mk6ac0a4b1093f31760c57b495066eb00c251df1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.094176   16244 cache.go:107] acquiring lock: {Name:mk3d551c74a49dcca463b3aabe366e6ef9b203f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.094180   16244 cache.go:107] acquiring lock: {Name:mk28e385c9472081a3231875efb2e405114dd24b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.094206   16244 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 exists
	I0415 05:19:12.094215   16244 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0415 05:19:12.094218   16244 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 39.041µs
	I0415 05:19:12.094221   16244 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0415 05:19:12.094213   16244 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-rc.2" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.2" took 43.292µs
	I0415 05:19:12.094226   16244 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-rc.2 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 succeeded
	I0415 05:19:12.094222   16244 cache.go:107] acquiring lock: {Name:mk7a6d09eeee038ad2ad851a029bc6ea4f76123d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.094232   16244 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 exists
	I0415 05:19:12.094237   16244 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2" took 127.667µs
	I0415 05:19:12.094240   16244 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-rc.2 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 succeeded
	I0415 05:19:12.094243   16244 cache.go:107] acquiring lock: {Name:mkbe8d330125fb028719cd09741fe5169976eccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:12.094265   16244 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 exists
	I0415 05:19:12.094269   16244 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2" took 48.208µs
	I0415 05:19:12.094273   16244 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 succeeded
	I0415 05:19:12.094285   16244 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0415 05:19:12.094289   16244 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 57.917µs
	I0415 05:19:12.094293   16244 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0415 05:19:12.094343   16244 cache.go:115] /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0415 05:19:12.094347   16244 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 171.5µs
	I0415 05:19:12.094350   16244 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0415 05:19:12.094354   16244 cache.go:87] Successfully saved all images to host disk.
	I0415 05:19:12.094470   16244 start.go:360] acquireMachinesLock for no-preload-680000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:12.094496   16244 start.go:364] duration metric: took 20.208µs to acquireMachinesLock for "no-preload-680000"
	I0415 05:19:12.094503   16244 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:19:12.094509   16244 fix.go:54] fixHost starting: 
	I0415 05:19:12.094620   16244 fix.go:112] recreateIfNeeded on no-preload-680000: state=Stopped err=<nil>
	W0415 05:19:12.094628   16244 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:19:12.101029   16244 out.go:177] * Restarting existing qemu2 VM for "no-preload-680000" ...
	I0415 05:19:12.105117   16244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:25:ec:0e:70:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2
	I0415 05:19:12.107106   16244 main.go:141] libmachine: STDOUT: 
	I0415 05:19:12.107122   16244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:12.107149   16244 fix.go:56] duration metric: took 12.640292ms for fixHost
	I0415 05:19:12.107152   16244 start.go:83] releasing machines lock for "no-preload-680000", held for 12.653375ms
	W0415 05:19:12.107158   16244 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:12.107184   16244 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:12.107189   16244 start.go:728] Will try again in 5 seconds ...
	I0415 05:19:17.109167   16244 start.go:360] acquireMachinesLock for no-preload-680000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:17.109738   16244 start.go:364] duration metric: took 423.375µs to acquireMachinesLock for "no-preload-680000"
	I0415 05:19:17.109869   16244 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:19:17.109892   16244 fix.go:54] fixHost starting: 
	I0415 05:19:17.110668   16244 fix.go:112] recreateIfNeeded on no-preload-680000: state=Stopped err=<nil>
	W0415 05:19:17.110698   16244 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:19:17.116194   16244 out.go:177] * Restarting existing qemu2 VM for "no-preload-680000" ...
	I0415 05:19:17.127348   16244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:25:ec:0e:70:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/no-preload-680000/disk.qcow2
	I0415 05:19:17.136766   16244 main.go:141] libmachine: STDOUT: 
	I0415 05:19:17.136835   16244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:17.136921   16244 fix.go:56] duration metric: took 27.032291ms for fixHost
	I0415 05:19:17.136940   16244 start.go:83] releasing machines lock for "no-preload-680000", held for 27.162542ms
	W0415 05:19:17.137153   16244 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-680000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-680000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:17.145129   16244 out.go:177] 
	W0415 05:19:17.149176   16244 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:17.149210   16244 out.go:239] * 
	* 
	W0415 05:19:17.151115   16244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:19:17.159192   16244 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-680000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (63.344083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
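Note: every failure in this group traces back to the same root cause visible in the stderr above — socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu2 VM never starts and each later step (kubectl context, image list, pause) fails downstream. A minimal sketch of how the daemon could be checked on the build host, assuming a Homebrew-managed socket_vmnet install (this log does not say how it was installed):

    # hypothetical manual checks on the macOS host
    ls -l /var/run/socket_vmnet                  # the unix socket the log is trying to reach
    sudo brew services list | grep socket_vmnet  # is the daemon loaded at all?
    sudo brew services restart socket_vmnet      # restart it if it is stopped or stuck
    # once the daemon is up, the same client exec the driver uses should succeed:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true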

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-680000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (34.253625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-680000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-680000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-680000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.643375ms)

** stderr ** 
	error: context "no-preload-680000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-680000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (31.247083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-680000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.30.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (30.008917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
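Note: the -want +got diff above does not mean the cached images were lost — the cache.go lines earlier in this section show all eight images already present in the on-disk cache, and the list comes back empty only because `image list` ran against a host in state=Stopped. A quick hedged way to confirm the cache directly, using the paths this log already prints:

    ls /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/registry.k8s.io
    ls /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/images/arm64/gcr.io/k8s-minikube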

TestStartStop/group/no-preload/serial/Pause (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-680000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-680000 --alsologtostderr -v=1: exit status 83 (42.612417ms)

-- stdout --
	* The control-plane node no-preload-680000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-680000"

-- /stdout --
** stderr ** 
	I0415 05:19:17.431408   16263 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:17.431552   16263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:17.431555   16263 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:17.431557   16263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:17.431679   16263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:17.431896   16263 out.go:298] Setting JSON to false
	I0415 05:19:17.431904   16263 mustload.go:65] Loading cluster: no-preload-680000
	I0415 05:19:17.432095   16263 config.go:182] Loaded profile config "no-preload-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.2
	I0415 05:19:17.436797   16263 out.go:177] * The control-plane node no-preload-680000 host is not running: state=Stopped
	I0415 05:19:17.440707   16263 out.go:177]   To start a cluster, run: "minikube start -p no-preload-680000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-680000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (31.176208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (31.548542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (9.86s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-188000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-188000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.825732333s)

-- stdout --
	* [embed-certs-188000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-188000" primary control-plane node in "embed-certs-188000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-188000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:19:17.889032   16286 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:17.889191   16286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:17.889195   16286 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:17.889198   16286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:17.889318   16286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:17.890434   16286 out.go:298] Setting JSON to false
	I0415 05:19:17.906527   16286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8328,"bootTime":1713175229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:19:17.906593   16286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:19:17.911261   16286 out.go:177] * [embed-certs-188000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:19:17.917142   16286 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:19:17.920221   16286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:19:17.917215   16286 notify.go:220] Checking for updates...
	I0415 05:19:17.926189   16286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:19:17.929156   16286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:19:17.932083   16286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:19:17.935123   16286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:19:17.938508   16286 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:19:17.938566   16286 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:19:17.938608   16286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:19:17.942110   16286 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:19:17.949121   16286 start.go:297] selected driver: qemu2
	I0415 05:19:17.949128   16286 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:19:17.949133   16286 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:19:17.951274   16286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:19:17.952449   16286 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:19:17.955182   16286 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:19:17.955209   16286 cni.go:84] Creating CNI manager for ""
	I0415 05:19:17.955216   16286 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:19:17.955225   16286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:19:17.955246   16286 start.go:340] cluster config:
	{Name:embed-certs-188000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:17.959382   16286 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:17.966070   16286 out.go:177] * Starting "embed-certs-188000" primary control-plane node in "embed-certs-188000" cluster
	I0415 05:19:17.970098   16286 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:19:17.970128   16286 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:19:17.970132   16286 cache.go:56] Caching tarball of preloaded images
	I0415 05:19:17.970200   16286 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:19:17.970205   16286 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:19:17.970254   16286 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/embed-certs-188000/config.json ...
	I0415 05:19:17.970269   16286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/embed-certs-188000/config.json: {Name:mkadf3a7687fdeb903cd6b96b75173318d15c615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:19:17.970513   16286 start.go:360] acquireMachinesLock for embed-certs-188000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:17.970544   16286 start.go:364] duration metric: took 24.084µs to acquireMachinesLock for "embed-certs-188000"
	I0415 05:19:17.970554   16286 start.go:93] Provisioning new machine with config: &{Name:embed-certs-188000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:19:17.970589   16286 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:19:17.978124   16286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:19:17.992864   16286 start.go:159] libmachine.API.Create for "embed-certs-188000" (driver="qemu2")
	I0415 05:19:17.992883   16286 client.go:168] LocalClient.Create starting
	I0415 05:19:17.992949   16286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:19:17.992978   16286 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:17.992988   16286 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:17.993021   16286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:19:17.993047   16286 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:17.993057   16286 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:17.993456   16286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:19:18.128297   16286 main.go:141] libmachine: Creating SSH key...
	I0415 05:19:18.196058   16286 main.go:141] libmachine: Creating Disk image...
	I0415 05:19:18.196065   16286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:19:18.196226   16286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2
	I0415 05:19:18.208890   16286 main.go:141] libmachine: STDOUT: 
	I0415 05:19:18.208909   16286 main.go:141] libmachine: STDERR: 
	I0415 05:19:18.208965   16286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2 +20000M
	I0415 05:19:18.220275   16286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:19:18.220294   16286 main.go:141] libmachine: STDERR: 
	I0415 05:19:18.220326   16286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2
	I0415 05:19:18.220332   16286 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:19:18.220369   16286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:1e:3a:29:2e:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2
	I0415 05:19:18.222143   16286 main.go:141] libmachine: STDOUT: 
	I0415 05:19:18.222161   16286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:18.222177   16286 client.go:171] duration metric: took 229.300417ms to LocalClient.Create
	I0415 05:19:20.224155   16286 start.go:128] duration metric: took 2.253652083s to createHost
	I0415 05:19:20.224179   16286 start.go:83] releasing machines lock for "embed-certs-188000", held for 2.253724042s
	W0415 05:19:20.224204   16286 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:20.232824   16286 out.go:177] * Deleting "embed-certs-188000" in qemu2 ...
	W0415 05:19:20.245399   16286 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:20.245407   16286 start.go:728] Will try again in 5 seconds ...
	I0415 05:19:25.247418   16286 start.go:360] acquireMachinesLock for embed-certs-188000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:25.247834   16286 start.go:364] duration metric: took 335.084µs to acquireMachinesLock for "embed-certs-188000"
	I0415 05:19:25.247929   16286 start.go:93] Provisioning new machine with config: &{Name:embed-certs-188000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:19:25.248243   16286 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:19:25.257866   16286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:19:25.307106   16286 start.go:159] libmachine.API.Create for "embed-certs-188000" (driver="qemu2")
	I0415 05:19:25.307167   16286 client.go:168] LocalClient.Create starting
	I0415 05:19:25.307292   16286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:19:25.307358   16286 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:25.307375   16286 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:25.307436   16286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:19:25.307481   16286 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:25.307495   16286 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:25.308487   16286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:19:25.454727   16286 main.go:141] libmachine: Creating SSH key...
	I0415 05:19:25.621269   16286 main.go:141] libmachine: Creating Disk image...
	I0415 05:19:25.621279   16286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:19:25.621509   16286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2
	I0415 05:19:25.634836   16286 main.go:141] libmachine: STDOUT: 
	I0415 05:19:25.634856   16286 main.go:141] libmachine: STDERR: 
	I0415 05:19:25.634920   16286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2 +20000M
	I0415 05:19:25.646015   16286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:19:25.646033   16286 main.go:141] libmachine: STDERR: 
	I0415 05:19:25.646045   16286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2
	I0415 05:19:25.646051   16286 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:19:25.646091   16286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:27:b6:24:bc:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2
	I0415 05:19:25.647787   16286 main.go:141] libmachine: STDOUT: 
	I0415 05:19:25.647802   16286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:25.647815   16286 client.go:171] duration metric: took 340.657042ms to LocalClient.Create
	I0415 05:19:27.649854   16286 start.go:128] duration metric: took 2.401688458s to createHost
	I0415 05:19:27.649880   16286 start.go:83] releasing machines lock for "embed-certs-188000", held for 2.402126291s
	W0415 05:19:27.649995   16286 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-188000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-188000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:27.661309   16286 out.go:177] 
	W0415 05:19:27.664288   16286 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:27.664303   16286 out.go:239] * 
	* 
	W0415 05:19:27.664737   16286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:19:27.677237   16286 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-188000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (33.195708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.86s)
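Note: unlike the no-preload SecondStart above, this is a fresh create, and it shows the create path itself is healthy — the ISO download, qemu-img convert, and qemu-img resize all succeed, and only the final socket_vmnet_client exec fails. One way to confirm qemu itself is not at fault would be a throwaway start on the builtin (user-mode) network, which bypasses socket_vmnet entirely — a sketch only, with the profile name made up for illustration and the caveat that the builtin network lacks the routable guest IP some of these tests rely on:

    out/minikube-darwin-arm64 start -p vmnet-bypass-check --driver=qemu2 --network=builtin
    out/minikube-darwin-arm64 delete -p vmnet-bypass-check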

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-188000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-188000 create -f testdata/busybox.yaml: exit status 1 (28.23825ms)

** stderr ** 
	error: context "embed-certs-188000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-188000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (31.829834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (30.52075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
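Note: the `context "embed-certs-188000" does not exist` errors here and in the following subtests are secondary — the failed FirstStart never wrote the profile's context into the kubeconfig this run points at. A hedged way to confirm that from the host:

    kubectl --kubeconfig=/Users/jenkins/minikube-integration/18644-12307/kubeconfig config get-contexts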

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-188000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-188000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-188000 describe deploy/metrics-server -n kube-system: exit status 1 (26.781125ms)

** stderr ** 
	error: context "embed-certs-188000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-188000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (30.759583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-188000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-188000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.204652209s)

-- stdout --
	* [embed-certs-188000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-188000" primary control-plane node in "embed-certs-188000" cluster
	* Restarting existing qemu2 VM for "embed-certs-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:19:30.137358   16332 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:30.137509   16332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:30.137513   16332 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:30.137516   16332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:30.137649   16332 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:30.138728   16332 out.go:298] Setting JSON to false
	I0415 05:19:30.156007   16332 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8341,"bootTime":1713175229,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:19:30.156086   16332 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:19:30.160472   16332 out.go:177] * [embed-certs-188000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:19:30.161671   16332 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:19:30.164400   16332 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:19:30.161695   16332 notify.go:220] Checking for updates...
	I0415 05:19:30.170384   16332 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:19:30.177334   16332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:19:30.183350   16332 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:19:30.191329   16332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:19:30.198634   16332 config.go:182] Loaded profile config "embed-certs-188000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:19:30.198903   16332 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:19:30.206345   16332 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:19:30.213376   16332 start.go:297] selected driver: qemu2
	I0415 05:19:30.213387   16332 start.go:901] validating driver "qemu2" against &{Name:embed-certs-188000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:30.213444   16332 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:19:30.215859   16332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:19:30.215906   16332 cni.go:84] Creating CNI manager for ""
	I0415 05:19:30.215912   16332 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:19:30.215930   16332 start.go:340] cluster config:
	{Name:embed-certs-188000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:30.220240   16332 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:30.228369   16332 out.go:177] * Starting "embed-certs-188000" primary control-plane node in "embed-certs-188000" cluster
	I0415 05:19:30.236382   16332 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:19:30.236404   16332 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:19:30.236413   16332 cache.go:56] Caching tarball of preloaded images
	I0415 05:19:30.236465   16332 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:19:30.236470   16332 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:19:30.236530   16332 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/embed-certs-188000/config.json ...
	I0415 05:19:30.240439   16332 start.go:360] acquireMachinesLock for embed-certs-188000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:30.240469   16332 start.go:364] duration metric: took 22.834µs to acquireMachinesLock for "embed-certs-188000"
	I0415 05:19:30.240480   16332 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:19:30.240486   16332 fix.go:54] fixHost starting: 
	I0415 05:19:30.240621   16332 fix.go:112] recreateIfNeeded on embed-certs-188000: state=Stopped err=<nil>
	W0415 05:19:30.240630   16332 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:19:30.244312   16332 out.go:177] * Restarting existing qemu2 VM for "embed-certs-188000" ...
	I0415 05:19:30.252357   16332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:27:b6:24:bc:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2
	I0415 05:19:30.254466   16332 main.go:141] libmachine: STDOUT: 
	I0415 05:19:30.254483   16332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:30.254511   16332 fix.go:56] duration metric: took 14.025334ms for fixHost
	I0415 05:19:30.254516   16332 start.go:83] releasing machines lock for "embed-certs-188000", held for 14.040041ms
	W0415 05:19:30.254524   16332 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:30.254561   16332 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:30.254565   16332 start.go:728] Will try again in 5 seconds ...
	I0415 05:19:35.256519   16332 start.go:360] acquireMachinesLock for embed-certs-188000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:35.256893   16332 start.go:364] duration metric: took 292.75µs to acquireMachinesLock for "embed-certs-188000"
	I0415 05:19:35.256992   16332 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:19:35.257010   16332 fix.go:54] fixHost starting: 
	I0415 05:19:35.257648   16332 fix.go:112] recreateIfNeeded on embed-certs-188000: state=Stopped err=<nil>
	W0415 05:19:35.257667   16332 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:19:35.262322   16332 out.go:177] * Restarting existing qemu2 VM for "embed-certs-188000" ...
	I0415 05:19:35.269263   16332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:27:b6:24:bc:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/embed-certs-188000/disk.qcow2
	I0415 05:19:35.277179   16332 main.go:141] libmachine: STDOUT: 
	I0415 05:19:35.277246   16332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:35.277308   16332 fix.go:56] duration metric: took 20.301167ms for fixHost
	I0415 05:19:35.277325   16332 start.go:83] releasing machines lock for "embed-certs-188000", held for 20.413917ms
	W0415 05:19:35.277501   16332 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:35.285121   16332 out.go:177] 
	W0415 05:19:35.289268   16332 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:35.289298   16332 out.go:239] * 
	* 
	W0415 05:19:35.290722   16332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:19:35.298125   16332 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-188000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (56.575792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
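Every start in this group aborts at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and every later kubectl call fails with a missing context. Below is a minimal Go sketch that probes that socket from the CI host. It is hypothetical diagnostic code, not part of the minikube test suite; the only assumption it makes is the SocketVMnetPath shown in the config dump above.

	// probe_socket_vmnet.go: hypothetical diagnostic sketch, not minikube code.
	// Dials the unix socket that socket_vmnet_client needs; a "connection
	// refused" here means the socket_vmnet daemon itself is down on the host,
	// matching the failure mode in the logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump
		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("socket file missing or unreadable: %v\n", err)
			return
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable: %v\n", err) // e.g. connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused for every profile, the failure is host-side (the socket_vmnet daemon needs to be restarted, however it is supervised on that machine), and the suggested "minikube delete -p embed-certs-188000" is unlikely to help.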

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-188000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (32.001083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-188000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.324708ms)

** stderr ** 
	error: context "embed-certs-188000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (30.490417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-188000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
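The (-want +got) block above is go-cmp diff notation: each expected image appears on a "-" (want) line and nothing appears on a "+" (got) line, meaning "image list" returned an empty set because the VM never started. A small sketch of how that notation is produced (hypothetical illustration, not the test's actual code):

	// cmp_diff_sketch.go: hypothetical illustration of the (-want +got)
	// output format, using github.com/google/go-cmp/cmp.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.12-0"}
		got := []string{} // what listing images on a never-started VM yields
		if d := cmp.Diff(want, got); d != "" {
			fmt.Printf("images missing (-want +got):\n%s", d)
		}
	}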
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (30.045625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-188000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-188000 --alsologtostderr -v=1: exit status 83 (41.890125ms)

-- stdout --
	* The control-plane node embed-certs-188000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-188000"

-- /stdout --
** stderr ** 
	I0415 05:19:35.558659   16368 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:35.558812   16368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:35.558815   16368 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:35.558818   16368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:35.558963   16368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:35.559201   16368 out.go:298] Setting JSON to false
	I0415 05:19:35.559209   16368 mustload.go:65] Loading cluster: embed-certs-188000
	I0415 05:19:35.559402   16368 config.go:182] Loaded profile config "embed-certs-188000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:19:35.563044   16368 out.go:177] * The control-plane node embed-certs-188000 host is not running: state=Stopped
	I0415 05:19:35.567174   16368 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-188000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-188000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (29.830209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (30.228917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-087000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-087000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.755057s)

-- stdout --
	* [default-k8s-diff-port-087000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-087000" primary control-plane node in "default-k8s-diff-port-087000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-087000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:19:36.236286   16403 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:36.236405   16403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:36.236408   16403 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:36.236410   16403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:36.236539   16403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:36.237664   16403 out.go:298] Setting JSON to false
	I0415 05:19:36.253728   16403 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8347,"bootTime":1713175229,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:19:36.253794   16403 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:19:36.259063   16403 out.go:177] * [default-k8s-diff-port-087000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:19:36.266002   16403 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:19:36.269987   16403 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:19:36.266042   16403 notify.go:220] Checking for updates...
	I0415 05:19:36.276005   16403 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:19:36.279029   16403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:19:36.282022   16403 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:19:36.284995   16403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:19:36.288299   16403 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:19:36.288362   16403 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:19:36.288404   16403 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:19:36.292921   16403 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:19:36.300001   16403 start.go:297] selected driver: qemu2
	I0415 05:19:36.300008   16403 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:19:36.300014   16403 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:19:36.302270   16403 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:19:36.304888   16403 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:19:36.308071   16403 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:19:36.308101   16403 cni.go:84] Creating CNI manager for ""
	I0415 05:19:36.308109   16403 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:19:36.308118   16403 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:19:36.308145   16403 start.go:340] cluster config:
	{Name:default-k8s-diff-port-087000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:36.312531   16403 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:36.319964   16403 out.go:177] * Starting "default-k8s-diff-port-087000" primary control-plane node in "default-k8s-diff-port-087000" cluster
	I0415 05:19:36.323937   16403 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:19:36.323959   16403 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:19:36.323967   16403 cache.go:56] Caching tarball of preloaded images
	I0415 05:19:36.324024   16403 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:19:36.324029   16403 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:19:36.324086   16403 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/default-k8s-diff-port-087000/config.json ...
	I0415 05:19:36.324098   16403 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/default-k8s-diff-port-087000/config.json: {Name:mk92002902dc251295be0d2f00b262b55734ad82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:19:36.324323   16403 start.go:360] acquireMachinesLock for default-k8s-diff-port-087000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:36.324360   16403 start.go:364] duration metric: took 28.542µs to acquireMachinesLock for "default-k8s-diff-port-087000"
	I0415 05:19:36.324371   16403 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:19:36.324397   16403 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:19:36.332025   16403 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:19:36.349084   16403 start.go:159] libmachine.API.Create for "default-k8s-diff-port-087000" (driver="qemu2")
	I0415 05:19:36.349110   16403 client.go:168] LocalClient.Create starting
	I0415 05:19:36.349174   16403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:19:36.349206   16403 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:36.349215   16403 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:36.349250   16403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:19:36.349273   16403 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:36.349279   16403 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:36.349601   16403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:19:36.481816   16403 main.go:141] libmachine: Creating SSH key...
	I0415 05:19:36.569237   16403 main.go:141] libmachine: Creating Disk image...
	I0415 05:19:36.569243   16403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:19:36.569405   16403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2
	I0415 05:19:36.582005   16403 main.go:141] libmachine: STDOUT: 
	I0415 05:19:36.582028   16403 main.go:141] libmachine: STDERR: 
	I0415 05:19:36.582083   16403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2 +20000M
	I0415 05:19:36.593647   16403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:19:36.593673   16403 main.go:141] libmachine: STDERR: 
	I0415 05:19:36.593690   16403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2
	I0415 05:19:36.593695   16403 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:19:36.593721   16403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:0a:e8:1e:78:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2
	I0415 05:19:36.595503   16403 main.go:141] libmachine: STDOUT: 
	I0415 05:19:36.595518   16403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:36.595536   16403 client.go:171] duration metric: took 246.431708ms to LocalClient.Create
	I0415 05:19:38.597644   16403 start.go:128] duration metric: took 2.273314666s to createHost
	I0415 05:19:38.597749   16403 start.go:83] releasing machines lock for "default-k8s-diff-port-087000", held for 2.273475459s
	W0415 05:19:38.597798   16403 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:38.613602   16403 out.go:177] * Deleting "default-k8s-diff-port-087000" in qemu2 ...
	W0415 05:19:38.629485   16403 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:38.629504   16403 start.go:728] Will try again in 5 seconds ...
	I0415 05:19:43.631453   16403 start.go:360] acquireMachinesLock for default-k8s-diff-port-087000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:43.631714   16403 start.go:364] duration metric: took 207.75µs to acquireMachinesLock for "default-k8s-diff-port-087000"
	I0415 05:19:43.631779   16403 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:19:43.631899   16403 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:19:43.650232   16403 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:19:43.689963   16403 start.go:159] libmachine.API.Create for "default-k8s-diff-port-087000" (driver="qemu2")
	I0415 05:19:43.690014   16403 client.go:168] LocalClient.Create starting
	I0415 05:19:43.690132   16403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:19:43.690189   16403 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:43.690205   16403 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:43.690263   16403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:19:43.690302   16403 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:43.690311   16403 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:43.690777   16403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:19:43.832085   16403 main.go:141] libmachine: Creating SSH key...
	I0415 05:19:43.888096   16403 main.go:141] libmachine: Creating Disk image...
	I0415 05:19:43.888104   16403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:19:43.888276   16403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2
	I0415 05:19:43.900624   16403 main.go:141] libmachine: STDOUT: 
	I0415 05:19:43.900648   16403 main.go:141] libmachine: STDERR: 
	I0415 05:19:43.900701   16403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2 +20000M
	I0415 05:19:43.912685   16403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:19:43.912708   16403 main.go:141] libmachine: STDERR: 
	I0415 05:19:43.912723   16403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2
	I0415 05:19:43.912741   16403 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:19:43.912769   16403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:db:d8:c4:ce:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2
	I0415 05:19:43.914584   16403 main.go:141] libmachine: STDOUT: 
	I0415 05:19:43.914598   16403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:43.914611   16403 client.go:171] duration metric: took 224.600541ms to LocalClient.Create
	I0415 05:19:45.916722   16403 start.go:128] duration metric: took 2.284866959s to createHost
	I0415 05:19:45.916818   16403 start.go:83] releasing machines lock for "default-k8s-diff-port-087000", held for 2.2851805s
	W0415 05:19:45.917158   16403 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-087000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-087000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:45.929692   16403 out.go:177] 
	W0415 05:19:45.933870   16403 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:45.933909   16403 out.go:239] * 
	* 
	W0415 05:19:45.936196   16403 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:19:45.947540   16403 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-087000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (67.758625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-087000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-087000 create -f testdata/busybox.yaml: exit status 1 (30.1435ms)

** stderr ** 
	error: context "default-k8s-diff-port-087000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-087000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (30.386875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (30.727583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
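
Editor's note: this failure is purely consequential: FirstStart never provisioned the VM, so minikube never wrote a kubeconfig entry, and every kubectl invocation in this group dies at client-config time with context "default-k8s-diff-port-087000" does not exist. A quick check with stock kubectl that distinguishes a missing context from a misnamed one:

    # List what the kubeconfig actually contains
    kubectl config get-contexts
    # Non-zero exit confirms the context is absent, matching the test output
    kubectl config get-contexts default-k8s-diff-port-087000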

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-087000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-087000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-087000 describe deploy/metrics-server -n kube-system: exit status 1 (27.210292ms)

** stderr ** 
	error: context "default-k8s-diff-port-087000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-087000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (31.805375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)
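
Editor's note: the `addons enable` command itself exits 0 against the stopped profile, since it only records the metrics-server image and registry overrides in the profile config; the failure surfaces only when the test queries the apiserver. On a healthy cluster the assertion reduces to a check like the sketch below (the jsonpath expression is illustrative, not the test's exact code):

    # Expect the overridden image: fake.domain/registry.k8s.io/echoserver:1.4
    kubectl --context default-k8s-diff-port-087000 -n kube-system \
      get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'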

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-087000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-087000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.192773875s)

-- stdout --
	* [default-k8s-diff-port-087000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-087000" primary control-plane node in "default-k8s-diff-port-087000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-087000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-087000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:19:50.270842   16464 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:50.270969   16464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:50.270972   16464 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:50.270974   16464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:50.271088   16464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:50.272126   16464 out.go:298] Setting JSON to false
	I0415 05:19:50.288086   16464 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8361,"bootTime":1713175229,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:19:50.288147   16464 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:19:50.292800   16464 out.go:177] * [default-k8s-diff-port-087000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:19:50.297735   16464 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:19:50.297839   16464 notify.go:220] Checking for updates...
	I0415 05:19:50.305686   16464 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:19:50.309735   16464 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:19:50.316715   16464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:19:50.320699   16464 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:19:50.323744   16464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:19:50.326920   16464 config.go:182] Loaded profile config "default-k8s-diff-port-087000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:19:50.327180   16464 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:19:50.331682   16464 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:19:50.338626   16464 start.go:297] selected driver: qemu2
	I0415 05:19:50.338631   16464 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:50.338674   16464 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:19:50.340917   16464 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:19:50.340965   16464 cni.go:84] Creating CNI manager for ""
	I0415 05:19:50.340976   16464 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:19:50.340994   16464 start.go:340] cluster config:
	{Name:default-k8s-diff-port-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-087000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:50.345189   16464 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:50.352722   16464 out.go:177] * Starting "default-k8s-diff-port-087000" primary control-plane node in "default-k8s-diff-port-087000" cluster
	I0415 05:19:50.356699   16464 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:19:50.356720   16464 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 05:19:50.356725   16464 cache.go:56] Caching tarball of preloaded images
	I0415 05:19:50.356778   16464 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:19:50.356783   16464 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:19:50.356842   16464 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/default-k8s-diff-port-087000/config.json ...
	I0415 05:19:50.357294   16464 start.go:360] acquireMachinesLock for default-k8s-diff-port-087000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:50.357324   16464 start.go:364] duration metric: took 20.667µs to acquireMachinesLock for "default-k8s-diff-port-087000"
	I0415 05:19:50.357332   16464 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:19:50.357339   16464 fix.go:54] fixHost starting: 
	I0415 05:19:50.357442   16464 fix.go:112] recreateIfNeeded on default-k8s-diff-port-087000: state=Stopped err=<nil>
	W0415 05:19:50.357451   16464 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:19:50.359079   16464 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-087000" ...
	I0415 05:19:50.366719   16464 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:db:d8:c4:ce:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2
	I0415 05:19:50.368569   16464 main.go:141] libmachine: STDOUT: 
	I0415 05:19:50.368587   16464 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:50.368612   16464 fix.go:56] duration metric: took 11.273666ms for fixHost
	I0415 05:19:50.368616   16464 start.go:83] releasing machines lock for "default-k8s-diff-port-087000", held for 11.288959ms
	W0415 05:19:50.368621   16464 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:50.368658   16464 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:50.368663   16464 start.go:728] Will try again in 5 seconds ...
	I0415 05:19:55.370818   16464 start.go:360] acquireMachinesLock for default-k8s-diff-port-087000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:55.371186   16464 start.go:364] duration metric: took 258.875µs to acquireMachinesLock for "default-k8s-diff-port-087000"
	I0415 05:19:55.371298   16464 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:19:55.371320   16464 fix.go:54] fixHost starting: 
	I0415 05:19:55.372083   16464 fix.go:112] recreateIfNeeded on default-k8s-diff-port-087000: state=Stopped err=<nil>
	W0415 05:19:55.372109   16464 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:19:55.382786   16464 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-087000" ...
	I0415 05:19:55.387015   16464 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:db:d8:c4:ce:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/default-k8s-diff-port-087000/disk.qcow2
	I0415 05:19:55.394074   16464 main.go:141] libmachine: STDOUT: 
	I0415 05:19:55.394143   16464 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:55.394203   16464 fix.go:56] duration metric: took 22.886709ms for fixHost
	I0415 05:19:55.394218   16464 start.go:83] releasing machines lock for "default-k8s-diff-port-087000", held for 23.011083ms
	W0415 05:19:55.394363   16464 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-087000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-087000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:55.404758   16464 out.go:177] 
	W0415 05:19:55.409821   16464 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:19:55.409835   16464 out.go:239] * 
	* 
	W0415 05:19:55.411153   16464 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:19:55.424762   16464 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-087000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (47.589625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)
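
Editor's note: SecondStart takes the existing-machine path (fixHost -> restart) instead of create, but fails on the identical socket_vmnet connect; the timestamps show the driver retries exactly once after a 5-second back-off and then exits 80. If the daemon is simply down between test groups, restarting it may clear the refusal; the command below assumes a Homebrew-managed service, which this log does not confirm:

    # Assumption: socket_vmnet installed and supervised via Homebrew services
    sudo brew services restart socket_vmnet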

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-087000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (30.935084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-087000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-087000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-087000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.766042ms)

** stderr ** 
	error: context "default-k8s-diff-port-087000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-087000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (30.39025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-087000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (31.156042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
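
Editor's note: with no VM behind the profile, `image list` returns an empty set, so the want/got diff above reports every expected v1.29.3 image as missing rather than any real tag drift. An equivalent manual check, assuming the JSON output is an array of objects with a repoTags field (true of recent minikube releases, but not established by this log):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-087000 image list --format=json \
      | jq -r '.[].repoTags[]' | sort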

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-087000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-087000 --alsologtostderr -v=1: exit status 83 (47.6ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-087000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-087000"

-- /stdout --
** stderr ** 
	I0415 05:19:55.673075   16483 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:55.673229   16483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:55.673232   16483 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:55.673234   16483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:55.673349   16483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:55.673596   16483 out.go:298] Setting JSON to false
	I0415 05:19:55.673604   16483 mustload.go:65] Loading cluster: default-k8s-diff-port-087000
	I0415 05:19:55.673802   16483 config.go:182] Loaded profile config "default-k8s-diff-port-087000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:19:55.678717   16483 out.go:177] * The control-plane node default-k8s-diff-port-087000 host is not running: state=Stopped
	I0415 05:19:55.687657   16483 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-087000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-087000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (30.770625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (31.171875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
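
Editor's note: `pause` short-circuits with exit status 83 as soon as cluster loading sees the host in state=Stopped, before touching the guest. The post-mortem helper already demonstrates the guard a wrapper script could apply; a sketch of that pattern, reusing the status invocation from this log:

    # Only attempt pause when the host actually reports Running
    state=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-087000)
    [ "$state" = "Running" ] && out/minikube-darwin-arm64 pause -p default-k8s-diff-port-087000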

TestStartStop/group/newest-cni/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-011000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-011000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.2: exit status 80 (9.760644917s)

-- stdout --
	* [newest-cni-011000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-011000" primary control-plane node in "newest-cni-011000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-011000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:19:56.136489   16506 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:19:56.136610   16506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:56.136614   16506 out.go:304] Setting ErrFile to fd 2...
	I0415 05:19:56.136616   16506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:19:56.136746   16506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:19:56.137821   16506 out.go:298] Setting JSON to false
	I0415 05:19:56.154149   16506 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8367,"bootTime":1713175229,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:19:56.154207   16506 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:19:56.158851   16506 out.go:177] * [newest-cni-011000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:19:56.165751   16506 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:19:56.165788   16506 notify.go:220] Checking for updates...
	I0415 05:19:56.169810   16506 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:19:56.173868   16506 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:19:56.176772   16506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:19:56.179849   16506 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:19:56.182880   16506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:19:56.186155   16506 config.go:182] Loaded profile config "multinode-671000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:19:56.186221   16506 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0415 05:19:56.186264   16506 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:19:56.193797   16506 out.go:177] * Using the qemu2 driver based on user configuration
	I0415 05:19:56.200852   16506 start.go:297] selected driver: qemu2
	I0415 05:19:56.200860   16506 start.go:901] validating driver "qemu2" against <nil>
	I0415 05:19:56.200867   16506 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:19:56.203222   16506 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0415 05:19:56.203248   16506 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0415 05:19:56.211872   16506 out.go:177] * Automatically selected the socket_vmnet network
	I0415 05:19:56.214874   16506 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0415 05:19:56.214903   16506 cni.go:84] Creating CNI manager for ""
	I0415 05:19:56.214911   16506 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:19:56.214918   16506 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 05:19:56.214946   16506 start.go:340] cluster config:
	{Name:newest-cni-011000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:19:56.219506   16506 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:56.226697   16506 out.go:177] * Starting "newest-cni-011000" primary control-plane node in "newest-cni-011000" cluster
	I0415 05:19:56.230837   16506 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 05:19:56.230863   16506 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0415 05:19:56.230869   16506 cache.go:56] Caching tarball of preloaded images
	I0415 05:19:56.230933   16506 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:19:56.230938   16506 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 05:19:56.231009   16506 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/newest-cni-011000/config.json ...
	I0415 05:19:56.231023   16506 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/newest-cni-011000/config.json: {Name:mk3d5ad003dbb9618aaefe0b7404f226ddcdeea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:19:56.231310   16506 start.go:360] acquireMachinesLock for newest-cni-011000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:19:56.231347   16506 start.go:364] duration metric: took 31.125µs to acquireMachinesLock for "newest-cni-011000"
	I0415 05:19:56.231360   16506 start.go:93] Provisioning new machine with config: &{Name:newest-cni-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:19:56.231394   16506 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:19:56.238772   16506 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:19:56.256795   16506 start.go:159] libmachine.API.Create for "newest-cni-011000" (driver="qemu2")
	I0415 05:19:56.256820   16506 client.go:168] LocalClient.Create starting
	I0415 05:19:56.256886   16506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:19:56.256919   16506 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:56.256933   16506 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:56.256970   16506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:19:56.256993   16506 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:56.257000   16506 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:56.257380   16506 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:19:56.388572   16506 main.go:141] libmachine: Creating SSH key...
	I0415 05:19:56.441601   16506 main.go:141] libmachine: Creating Disk image...
	I0415 05:19:56.441606   16506 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:19:56.441757   16506 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2
	I0415 05:19:56.453996   16506 main.go:141] libmachine: STDOUT: 
	I0415 05:19:56.454026   16506 main.go:141] libmachine: STDERR: 
	I0415 05:19:56.454088   16506 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2 +20000M
	I0415 05:19:56.465161   16506 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:19:56.465182   16506 main.go:141] libmachine: STDERR: 
	I0415 05:19:56.465193   16506 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2
	I0415 05:19:56.465200   16506 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:19:56.465237   16506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e1:28:3e:fa:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2
	I0415 05:19:56.466857   16506 main.go:141] libmachine: STDOUT: 
	I0415 05:19:56.466877   16506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:19:56.466906   16506 client.go:171] duration metric: took 210.081208ms to LocalClient.Create
	I0415 05:19:58.469013   16506 start.go:128] duration metric: took 2.237684541s to createHost
	I0415 05:19:58.469089   16506 start.go:83] releasing machines lock for "newest-cni-011000", held for 2.237825084s
	W0415 05:19:58.469147   16506 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:58.480052   16506 out.go:177] * Deleting "newest-cni-011000" in qemu2 ...
	W0415 05:19:58.503105   16506 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:19:58.503136   16506 start.go:728] Will try again in 5 seconds ...
	I0415 05:20:03.505239   16506 start.go:360] acquireMachinesLock for newest-cni-011000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:20:03.505907   16506 start.go:364] duration metric: took 480.667µs to acquireMachinesLock for "newest-cni-011000"
	I0415 05:20:03.506031   16506 start.go:93] Provisioning new machine with config: &{Name:newest-cni-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:20:03.506326   16506 start.go:125] createHost starting for "" (driver="qemu2")
	I0415 05:20:03.516881   16506 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 05:20:03.566181   16506 start.go:159] libmachine.API.Create for "newest-cni-011000" (driver="qemu2")
	I0415 05:20:03.566231   16506 client.go:168] LocalClient.Create starting
	I0415 05:20:03.566357   16506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/ca.pem
	I0415 05:20:03.566418   16506 main.go:141] libmachine: Decoding PEM data...
	I0415 05:20:03.566440   16506 main.go:141] libmachine: Parsing certificate...
	I0415 05:20:03.566505   16506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-12307/.minikube/certs/cert.pem
	I0415 05:20:03.566550   16506 main.go:141] libmachine: Decoding PEM data...
	I0415 05:20:03.566560   16506 main.go:141] libmachine: Parsing certificate...
	I0415 05:20:03.567045   16506 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso...
	I0415 05:20:03.697295   16506 main.go:141] libmachine: Creating SSH key...
	I0415 05:20:03.795826   16506 main.go:141] libmachine: Creating Disk image...
	I0415 05:20:03.795835   16506 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0415 05:20:03.796010   16506 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2
	I0415 05:20:03.808483   16506 main.go:141] libmachine: STDOUT: 
	I0415 05:20:03.808511   16506 main.go:141] libmachine: STDERR: 
	I0415 05:20:03.808559   16506 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2 +20000M
	I0415 05:20:03.819417   16506 main.go:141] libmachine: STDOUT: Image resized.
	
	I0415 05:20:03.819455   16506 main.go:141] libmachine: STDERR: 
	I0415 05:20:03.819468   16506 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2
	I0415 05:20:03.819473   16506 main.go:141] libmachine: Starting QEMU VM...
	I0415 05:20:03.819505   16506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:f6:f3:ea:72:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2
	I0415 05:20:03.821088   16506 main.go:141] libmachine: STDOUT: 
	I0415 05:20:03.821106   16506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:20:03.821120   16506 client.go:171] duration metric: took 254.895042ms to LocalClient.Create
	I0415 05:20:05.823213   16506 start.go:128] duration metric: took 2.316948833s to createHost
	I0415 05:20:05.823267   16506 start.go:83] releasing machines lock for "newest-cni-011000", held for 2.317429917s
	W0415 05:20:05.823624   16506 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:20:05.837174   16506 out.go:177] 
	W0415 05:20:05.841370   16506 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:20:05.841432   16506 out.go:239] * 
	* 
	W0415 05:20:05.844233   16506 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:20:05.854282   16506 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-011000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000: exit status 7 (70.505042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.83s)
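
Note on the failure mode: both the initial create and the qemu launch above go through /opt/socket_vmnet/bin/socket_vmnet_client, which could not reach the socket_vmnet daemon's socket at /var/run/socket_vmnet ("Connection refused"). A host-side check along the following lines would confirm whether the daemon is running at all (a diagnostic sketch, not part of the test run; the grep patterns and launchd usage are assumptions about a standard socket_vmnet install):

	# Does the unix socket exist on the build host?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon process alive?
	ps aux | grep '[s]ocket_vmnet'
	# If installed as a launchd service, is it loaded?
	sudo launchctl list | grep -i socket_vmnet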

TestStartStop/group/newest-cni/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-011000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-011000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.2: exit status 80 (5.2189965s)

-- stdout --
	* [newest-cni-011000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-011000" primary control-plane node in "newest-cni-011000" cluster
	* Restarting existing qemu2 VM for "newest-cni-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0415 05:20:09.389171   16553 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:20:09.389303   16553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:20:09.389306   16553 out.go:304] Setting ErrFile to fd 2...
	I0415 05:20:09.389309   16553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:20:09.389430   16553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:20:09.390504   16553 out.go:298] Setting JSON to false
	I0415 05:20:09.406374   16553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8380,"bootTime":1713175229,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 05:20:09.406431   16553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:20:09.411600   16553 out.go:177] * [newest-cni-011000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 05:20:09.423526   16553 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:20:09.418673   16553 notify.go:220] Checking for updates...
	I0415 05:20:09.430389   16553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 05:20:09.437598   16553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 05:20:09.445489   16553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:20:09.453399   16553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 05:20:09.461509   16553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:20:09.464836   16553 config.go:182] Loaded profile config "newest-cni-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.2
	I0415 05:20:09.465109   16553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:20:09.468530   16553 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 05:20:09.476467   16553 start.go:297] selected driver: qemu2
	I0415 05:20:09.476473   16553 start.go:901] validating driver "qemu2" against &{Name:newest-cni-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:20:09.476541   16553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:20:09.479011   16553 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0415 05:20:09.479064   16553 cni.go:84] Creating CNI manager for ""
	I0415 05:20:09.479073   16553 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 05:20:09.479097   16553 start.go:340] cluster config:
	{Name:newest-cni-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:20:09.483723   16553 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:20:09.490559   16553 out.go:177] * Starting "newest-cni-011000" primary control-plane node in "newest-cni-011000" cluster
	I0415 05:20:09.494556   16553 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 05:20:09.494591   16553 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0415 05:20:09.494602   16553 cache.go:56] Caching tarball of preloaded images
	I0415 05:20:09.494696   16553 preload.go:173] Found /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 05:20:09.494706   16553 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 05:20:09.494776   16553 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/newest-cni-011000/config.json ...
	I0415 05:20:09.495251   16553 start.go:360] acquireMachinesLock for newest-cni-011000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:20:09.495288   16553 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "newest-cni-011000"
	I0415 05:20:09.495298   16553 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:20:09.495305   16553 fix.go:54] fixHost starting: 
	I0415 05:20:09.495438   16553 fix.go:112] recreateIfNeeded on newest-cni-011000: state=Stopped err=<nil>
	W0415 05:20:09.495448   16553 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:20:09.499483   16553 out.go:177] * Restarting existing qemu2 VM for "newest-cni-011000" ...
	I0415 05:20:09.506636   16553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:f6:f3:ea:72:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2
	I0415 05:20:09.508916   16553 main.go:141] libmachine: STDOUT: 
	I0415 05:20:09.508942   16553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:20:09.508972   16553 fix.go:56] duration metric: took 13.668833ms for fixHost
	I0415 05:20:09.508976   16553 start.go:83] releasing machines lock for "newest-cni-011000", held for 13.68375ms
	W0415 05:20:09.508987   16553 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:20:09.509018   16553 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:20:09.509024   16553 start.go:728] Will try again in 5 seconds ...
	I0415 05:20:14.511002   16553 start.go:360] acquireMachinesLock for newest-cni-011000: {Name:mk069e25d81a3c4e6efa5d5b6be8d541b2ef3b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 05:20:14.511363   16553 start.go:364] duration metric: took 254.791µs to acquireMachinesLock for "newest-cni-011000"
	I0415 05:20:14.511451   16553 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:20:14.511465   16553 fix.go:54] fixHost starting: 
	I0415 05:20:14.512022   16553 fix.go:112] recreateIfNeeded on newest-cni-011000: state=Stopped err=<nil>
	W0415 05:20:14.512046   16553 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 05:20:14.518641   16553 out.go:177] * Restarting existing qemu2 VM for "newest-cni-011000" ...
	I0415 05:20:14.528798   16553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:f6:f3:ea:72:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18644-12307/.minikube/machines/newest-cni-011000/disk.qcow2
	I0415 05:20:14.538967   16553 main.go:141] libmachine: STDOUT: 
	I0415 05:20:14.539061   16553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0415 05:20:14.539161   16553 fix.go:56] duration metric: took 27.693167ms for fixHost
	I0415 05:20:14.539182   16553 start.go:83] releasing machines lock for "newest-cni-011000", held for 27.798916ms
	W0415 05:20:14.539391   16553 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0415 05:20:14.548532   16553 out.go:177] 
	W0415 05:20:14.552708   16553 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0415 05:20:14.552741   16553 out.go:239] * 
	* 
	W0415 05:20:14.555199   16553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:20:14.564418   16553 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-011000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000: exit status 7 (69.674125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.29s)
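
The restart path fails the same way: fix.go finds the machine Stopped, the relaunch through socket_vmnet_client is refused, minikube retries once after 5 seconds, then exits 80 (GUEST_PROVISION). Following the advice printed in the log would look roughly like this (a sketch, untested here; remaining start flags as in the test invocation above, and unlikely to help until /var/run/socket_vmnet is reachable again, since the profile is not the root cause):

	out/minikube-darwin-arm64 delete -p newest-cni-011000
	out/minikube-darwin-arm64 start -p newest-cni-011000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.30.0-rc.2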

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-011000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.30.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000: exit status 7 (30.802791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
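
The all-minus diff above is a downstream symptom, not an image regression: the VM never started, so the image listing the test performs returns nothing and every expected v1.30.0-rc.2 image is reported missing. The same listing can be replayed by hand (this is the exact command from the log):

	out/minikube-darwin-arm64 -p newest-cni-011000 image list --format=json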

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-011000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-011000 --alsologtostderr -v=1: exit status 83 (43.744709ms)

-- stdout --
	* The control-plane node newest-cni-011000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-011000"

-- /stdout --
** stderr ** 
	I0415 05:20:14.753795   16568 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:20:14.753944   16568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:20:14.753948   16568 out.go:304] Setting ErrFile to fd 2...
	I0415 05:20:14.753950   16568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:20:14.754072   16568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 05:20:14.754272   16568 out.go:298] Setting JSON to false
	I0415 05:20:14.754280   16568 mustload.go:65] Loading cluster: newest-cni-011000
	I0415 05:20:14.754466   16568 config.go:182] Loaded profile config "newest-cni-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.2
	I0415 05:20:14.758517   16568 out.go:177] * The control-plane node newest-cni-011000 host is not running: state=Stopped
	I0415 05:20:14.762551   16568 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-011000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-011000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000: exit status 7 (30.90425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-011000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000: exit status 7 (30.854709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
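
Pause exits with status 83 because, as the log states, the control-plane host is Stopped; minikube prints the start hint instead of pausing. The post-mortem status command doubles as the minimal reproduction (copied from the log; on this host it prints "Stopped" and exits 7):

	out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000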


Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.29.3/json-events 18.74
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.23
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.22
21 TestDownloadOnly/v1.30.0-rc.2/json-events 17.54
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.22
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.03
48 TestErrorSpam/start 0.37
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.13
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 8.2
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.71
64 TestFunctional/serial/CacheCmd/cache/add_local 1.17
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.25
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 0.33
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.42
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.39
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.03
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.15
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
255 TestNoKubernetes/serial/ProfileList 0.15
256 TestNoKubernetes/serial/Stop 1.96
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.06
259 TestStoppedBinaryUpgrade/Setup 0.97
283 TestStartStop/group/old-k8s-version/serial/Stop 2.82
284 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
294 TestStartStop/group/no-preload/serial/Stop 3.27
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/embed-certs/serial/Stop 2.06
306 TestStoppedBinaryUpgrade/MinikubeLogs 0.75
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.86
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.24
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-387000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-387000: exit status 85 (101.296959ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |          |
	|         | -p download-only-387000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=qemu2                 |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 04:53:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 04:53:17.183483   12717 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:53:17.183639   12717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:53:17.183642   12717 out.go:304] Setting ErrFile to fd 2...
	I0415 04:53:17.183644   12717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:53:17.183767   12717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	W0415 04:53:17.183877   12717 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18644-12307/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18644-12307/.minikube/config/config.json: no such file or directory
	I0415 04:53:17.185111   12717 out.go:298] Setting JSON to true
	I0415 04:53:17.201430   12717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6768,"bootTime":1713175229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:53:17.201523   12717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:53:17.208138   12717 out.go:97] [download-only-387000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:53:17.209810   12717 out.go:169] MINIKUBE_LOCATION=18644
	W0415 04:53:17.208289   12717 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 04:53:17.208298   12717 notify.go:220] Checking for updates...
	I0415 04:53:17.219069   12717 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:53:17.223043   12717 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:53:17.226059   12717 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:53:17.229028   12717 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	W0415 04:53:17.235033   12717 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 04:53:17.235257   12717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:53:17.237993   12717 out.go:97] Using the qemu2 driver based on user configuration
	I0415 04:53:17.238012   12717 start.go:297] selected driver: qemu2
	I0415 04:53:17.238029   12717 start.go:901] validating driver "qemu2" against <nil>
	I0415 04:53:17.238116   12717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:53:17.241024   12717 out.go:169] Automatically selected the socket_vmnet network
	I0415 04:53:17.246364   12717 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0415 04:53:17.246464   12717 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 04:53:17.246548   12717 cni.go:84] Creating CNI manager for ""
	I0415 04:53:17.246556   12717 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 04:53:17.246606   12717 start.go:340] cluster config:
	{Name:download-only-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:53:17.251664   12717 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 04:53:17.255082   12717 out.go:97] Downloading VM boot image ...
	I0415 04:53:17.255102   12717 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/iso/arm64/minikube-v1.33.0-1712854267-18621-arm64.iso
	I0415 04:53:22.479828   12717 out.go:97] Starting "download-only-387000" primary control-plane node in "download-only-387000" cluster
	I0415 04:53:22.479853   12717 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:53:22.533703   12717 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 04:53:22.533716   12717 cache.go:56] Caching tarball of preloaded images
	I0415 04:53:22.534201   12717 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:53:22.538964   12717 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 04:53:22.538979   12717 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:22.611304   12717 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 04:53:28.205077   12717 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:28.205238   12717 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:28.900854   12717 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 04:53:28.901059   12717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/download-only-387000/config.json ...
	I0415 04:53:28.901085   12717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/download-only-387000/config.json: {Name:mk6b93fa5cb41271a937c1d2300d331d246071d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:53:28.901310   12717 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:53:28.901491   12717 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0415 04:53:29.586820   12717 out.go:169] 
	W0415 04:53:29.592947   12717 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109127260 0x109127260 0x109127260 0x109127260 0x109127260 0x109127260 0x109127260] Decompressors:map[bz2:0x1400058e348 gz:0x1400058e3d0 tar:0x1400058e380 tar.bz2:0x1400058e390 tar.gz:0x1400058e3a0 tar.xz:0x1400058e3b0 tar.zst:0x1400058e3c0 tbz2:0x1400058e390 tgz:0x1400058e3a0 txz:0x1400058e3b0 tzst:0x1400058e3c0 xz:0x1400058e3d8 zip:0x1400058e3e0 zst:0x1400058e3f0] Getters:map[file:0x14002504640 http:0x140004ba1e0 https:0x140004ba230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0415 04:53:29.592972   12717 out_reason.go:110] 
	W0415 04:53:29.599817   12717 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 04:53:29.603851   12717 out.go:169] 
	
	
	* The control-plane node download-only-387000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-387000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
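
The 404 buried in this log is consistent with dl.k8s.io simply not publishing a darwin/arm64 kubectl (or its .sha256) for the old v1.20.0 release, so the checksum fetch fails with "bad response code: 404". A manual spot check would confirm this (a sketch, not part of the test; compare against v1.29.3, which this run downloads successfully later):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # expect 404
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256   # expect 200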

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-387000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.3/json-events (18.74s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-219000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-219000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 : (18.74135925s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (18.74s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-219000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-219000: exit status 85 (78.97975ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
	|         | -p download-only-387000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| delete  | -p download-only-387000        | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| start   | -o=json --download-only        | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
	|         | -p download-only-219000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 04:53:30
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 04:53:30.284336   12752 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:53:30.284475   12752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:53:30.284478   12752 out.go:304] Setting ErrFile to fd 2...
	I0415 04:53:30.284481   12752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:53:30.284620   12752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:53:30.285632   12752 out.go:298] Setting JSON to true
	I0415 04:53:30.301800   12752 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6781,"bootTime":1713175229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:53:30.301863   12752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:53:30.305149   12752 out.go:97] [download-only-219000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:53:30.309149   12752 out.go:169] MINIKUBE_LOCATION=18644
	I0415 04:53:30.305248   12752 notify.go:220] Checking for updates...
	I0415 04:53:30.321119   12752 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:53:30.324152   12752 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:53:30.327196   12752 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:53:30.330129   12752 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	W0415 04:53:30.336211   12752 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 04:53:30.336401   12752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:53:30.339147   12752 out.go:97] Using the qemu2 driver based on user configuration
	I0415 04:53:30.339158   12752 start.go:297] selected driver: qemu2
	I0415 04:53:30.339162   12752 start.go:901] validating driver "qemu2" against <nil>
	I0415 04:53:30.339212   12752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:53:30.342090   12752 out.go:169] Automatically selected the socket_vmnet network
	I0415 04:53:30.347558   12752 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0415 04:53:30.347649   12752 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 04:53:30.347687   12752 cni.go:84] Creating CNI manager for ""
	I0415 04:53:30.347697   12752 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 04:53:30.347704   12752 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 04:53:30.347749   12752 start.go:340] cluster config:
	{Name:download-only-219000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-219000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:53:30.352303   12752 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 04:53:30.355181   12752 out.go:97] Starting "download-only-219000" primary control-plane node in "download-only-219000" cluster
	I0415 04:53:30.355195   12752 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:53:30.409301   12752 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 04:53:30.409315   12752 cache.go:56] Caching tarball of preloaded images
	I0415 04:53:30.409497   12752 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:53:30.413720   12752 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 04:53:30.413727   12752 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:30.485467   12752 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 04:53:35.303894   12752 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:35.304179   12752 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:35.860828   12752 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 04:53:35.861086   12752 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/download-only-219000/config.json ...
	I0415 04:53:35.861104   12752 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/download-only-219000/config.json: {Name:mk72d6a726c76232b8dc1b3eb6c40d87f259a23d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:53:35.861352   12752 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:53:35.861471   12752 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-219000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-219000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
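The preload fetch above pins an md5 digest in the request URL (?checksum=md5:c0bb0715201da444334d968c298f45eb), and the "saving/verifying checksum" lines are minikube re-checking that digest on disk. A minimal sketch of redoing the check by hand, with the cache path and digest copied from the log (md5 -q is the macOS form):

    # Recompute the digest of the cached preload tarball; the output should
    # equal the value pinned in the download URL.
    md5 -q /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4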

TestDownloadOnly/v1.29.3/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.22s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-219000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.0-rc.2/json-events (17.54s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-370000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-370000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=qemu2 : (17.540351209s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (17.54s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-370000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-370000: exit status 85 (76.194333ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
	|         | -p download-only-387000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| delete  | -p download-only-387000           | download-only-387000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| start   | -o=json --download-only           | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
	|         | -p download-only-219000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| delete  | -p download-only-219000           | download-only-219000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT | 15 Apr 24 04:53 PDT |
	| start   | -o=json --download-only           | download-only-370000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:53 PDT |                     |
	|         | -p download-only-370000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 04:53:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 04:53:49.554909   12789 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:53:49.555054   12789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:53:49.555058   12789 out.go:304] Setting ErrFile to fd 2...
	I0415 04:53:49.555060   12789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:53:49.555178   12789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:53:49.556239   12789 out.go:298] Setting JSON to true
	I0415 04:53:49.572418   12789 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6800,"bootTime":1713175229,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:53:49.572484   12789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:53:49.577469   12789 out.go:97] [download-only-370000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:53:49.581371   12789 out.go:169] MINIKUBE_LOCATION=18644
	I0415 04:53:49.577581   12789 notify.go:220] Checking for updates...
	I0415 04:53:49.589363   12789 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:53:49.592341   12789 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:53:49.595376   12789 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:53:49.598374   12789 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	W0415 04:53:49.604365   12789 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 04:53:49.604554   12789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:53:49.607341   12789 out.go:97] Using the qemu2 driver based on user configuration
	I0415 04:53:49.607348   12789 start.go:297] selected driver: qemu2
	I0415 04:53:49.607351   12789 start.go:901] validating driver "qemu2" against <nil>
	I0415 04:53:49.607390   12789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:53:49.610298   12789 out.go:169] Automatically selected the socket_vmnet network
	I0415 04:53:49.615484   12789 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0415 04:53:49.615583   12789 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 04:53:49.615621   12789 cni.go:84] Creating CNI manager for ""
	I0415 04:53:49.615629   12789 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 04:53:49.615640   12789 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 04:53:49.615687   12789 start.go:340] cluster config:
	{Name:download-only-370000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:53:49.619884   12789 iso.go:125] acquiring lock: {Name:mk1114ba859cf378ee323413999f1a034fea1e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 04:53:49.622339   12789 out.go:97] Starting "download-only-370000" primary control-plane node in "download-only-370000" cluster
	I0415 04:53:49.622345   12789 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 04:53:49.673175   12789 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0415 04:53:49.673199   12789 cache.go:56] Caching tarball of preloaded images
	I0415 04:53:49.673365   12789 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 04:53:49.677449   12789 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 04:53:49.677456   12789 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:49.746125   12789 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:f0cbac72359c845c6afc5b35133f3fed -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0415 04:53:53.875820   12789 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:53.876000   12789 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0415 04:53:54.419237   12789 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 04:53:54.419431   12789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/download-only-370000/config.json ...
	I0415 04:53:54.419448   12789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-12307/.minikube/profiles/download-only-370000/config.json: {Name:mkbb22b1d05e46bf867e4b54b480d9871170d1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:53:54.420526   12789 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 04:53:54.420643   12789 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.30.0-rc.2/kubectl
	
	
	* The control-plane node download-only-370000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-370000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)
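Unlike the preload, the kubectl download above uses checksum=file:...kubectl.sha256, i.e. the expected digest is fetched from a sidecar file rather than embedded in the URL. A sketch of the equivalent manual check, with both paths taken from the log:

    # Fetch the published digest, then hash the cached binary; the two hex
    # strings should match.
    curl -fsSL https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/arm64/kubectl.sha256
    shasum -a 256 /Users/jenkins/minikube-integration/18644-12307/.minikube/cache/darwin/arm64/v1.30.0-rc.2/kubectl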

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.22s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-370000
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.35s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-655000 --alsologtostderr --binary-mirror http://127.0.0.1:52001 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-655000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-655000
--- PASS: TestBinaryMirror (0.35s)
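--binary-mirror only redirects where the kubectl/kubelet/kubeadm binaries are fetched from, so the test can point it at a throwaway local listener. A rough reproduction outside the harness, assuming a static file server that mirrors the usual /release/<version>/bin/... layout (the python3 server here is an illustrative stand-in for the listener the test starts itself):

    # Serve the current directory as a stand-in mirror, then point minikube at it.
    python3 -m http.server 52001 &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-655000 \
      --binary-mirror http://127.0.0.1:52001 --driver=qemu2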

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-891000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-891000: exit status 85 (59.340042ms)

-- stdout --
	* Profile "addons-891000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-891000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-891000: exit status 85 (63.185792ms)

-- stdout --
	* Profile "addons-891000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
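Both PreSetup checks pass precisely because the profile is missing: addons enable/disable exit with status 85 and print the hint instead of touching a cluster. Against a cluster that actually exists, the same pair of commands would be expected to succeed:

    out/minikube-darwin-arm64 start -p addons-891000 --driver=qemu2
    out/minikube-darwin-arm64 addons enable dashboard -p addons-891000
    out/minikube-darwin-arm64 addons disable dashboard -p addons-891000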

TestHyperKitDriverInstallOrUpdate (10.03s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.03s)

TestErrorSpam/start (0.37s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status: exit status 7 (32.349958ms)

-- stdout --
	nospam-668000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status: exit status 7 (31.973375ms)

-- stdout --
	nospam-668000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status: exit status 7 (31.950166ms)

-- stdout --
	nospam-668000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
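The repeated exit status 7 is the expected encoding for a stopped profile: minikube status sets one exit-code bit per non-running layer (roughly host, cluster, and Kubernetes), so 7 = 1 + 2 + 4 here rather than a generic failure. To surface the code directly:

    # exit=7 while everything above reports Stopped
    out/minikube-darwin-arm64 -p nospam-668000 status; echo "exit=$?"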

TestErrorSpam/pause (0.13s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause: exit status 83 (41.569583ms)

-- stdout --
	* The control-plane node nospam-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-668000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause: exit status 83 (41.916083ms)

-- stdout --
	* The control-plane node nospam-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-668000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause: exit status 83 (41.845583ms)

-- stdout --
	* The control-plane node nospam-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-668000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause: exit status 83 (38.799ms)

-- stdout --
	* The control-plane node nospam-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-668000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause: exit status 83 (41.910166ms)

-- stdout --
	* The control-plane node nospam-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-668000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause: exit status 83 (39.202833ms)

-- stdout --
	* The control-plane node nospam-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-668000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (8.2s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 stop: (3.032012542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 stop: (3.237389625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-668000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-668000 stop: (1.924062125s)
--- PASS: TestErrorSpam/stop (8.20s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18644-12307/.minikube/files/etc/test/nested/copy/12715/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.71s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.71s)
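Together with the list and delete subtests below, this exercises the whole remote-cache lifecycle; condensed into one sequence (commands taken from the surrounding runs):

    out/minikube-darwin-arm64 -p functional-424000 cache add registry.k8s.io/pause:3.1
    out/minikube-darwin-arm64 cache list                              # cached image appears here
    out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1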

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3125653646/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cache add minikube-local-cache-test:functional-424000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 cache delete minikube-local-cache-test:functional-424000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-424000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.25s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 config get cpus: exit status 14 (32.435291ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 config get cpus: exit status 14 (36.565625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
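The two exit-14 results are the point of the test: config get on an unset key is an error, while a set/get/unset cycle round-trips cleanly. The full cycle, condensed:

    out/minikube-darwin-arm64 -p functional-424000 config get cpus    # exit status 14: key unset
    out/minikube-darwin-arm64 -p functional-424000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-424000 config get cpus    # prints 2
    out/minikube-darwin-arm64 -p functional-424000 config unset cpus
    out/minikube-darwin-arm64 -p functional-424000 config get cpus    # exit status 14 again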

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-424000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-424000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (164.576708ms)

-- stdout --
	* [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0415 04:55:44.585621   13410 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:55:44.585800   13410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:44.585804   13410 out.go:304] Setting ErrFile to fd 2...
	I0415 04:55:44.585807   13410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:44.585948   13410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:55:44.587153   13410 out.go:298] Setting JSON to false
	I0415 04:55:44.606129   13410 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6915,"bootTime":1713175229,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:55:44.606195   13410 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:55:44.610158   13410 out.go:177] * [functional-424000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0415 04:55:44.616936   13410 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 04:55:44.616987   13410 notify.go:220] Checking for updates...
	I0415 04:55:44.621090   13410 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:55:44.624133   13410 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:55:44.625693   13410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:55:44.629101   13410 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 04:55:44.636311   13410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 04:55:44.640221   13410 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:55:44.640486   13410 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:55:44.648015   13410 out.go:177] * Using the qemu2 driver based on existing profile
	I0415 04:55:44.654036   13410 start.go:297] selected driver: qemu2
	I0415 04:55:44.654044   13410 start.go:901] validating driver "qemu2" against &{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:55:44.654124   13410 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 04:55:44.661138   13410 out.go:177] 
	W0415 04:55:44.665083   13410 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 04:55:44.669049   13410 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-424000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
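Note that --dry-run still runs flag validation, which is why the 250MB request exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY before anything is created. Presumably any allocation at or above the stated 1800MB floor clears the same check:

    # Same dry-run with a memory request above the validation floor.
    out/minikube-darwin-arm64 start -p functional-424000 --dry-run --memory 2048MB --alsologtostderr --driver=qemu2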

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-424000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-424000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.704708ms)

-- stdout --
	* [functional-424000] minikube v1.33.0-beta.0 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0415 04:55:44.819039   13421 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:55:44.819162   13421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:44.819164   13421 out.go:304] Setting ErrFile to fd 2...
	I0415 04:55:44.819167   13421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:55:44.819290   13421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-12307/.minikube/bin
	I0415 04:55:44.820766   13421 out.go:298] Setting JSON to false
	I0415 04:55:44.837441   13421 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6915,"bootTime":1713175229,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0415 04:55:44.837519   13421 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:55:44.841208   13421 out.go:177] * [functional-424000] minikube v1.33.0-beta.0 sur Darwin 14.4.1 (arm64)
	I0415 04:55:44.847094   13421 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 04:55:44.851091   13421 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	I0415 04:55:44.847141   13421 notify.go:220] Checking for updates...
	I0415 04:55:44.856942   13421 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0415 04:55:44.860039   13421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:55:44.863062   13421 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	I0415 04:55:44.864506   13421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 04:55:44.868337   13421 config.go:182] Loaded profile config "functional-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:55:44.868595   13421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:55:44.873079   13421 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0415 04:55:44.877963   13421 start.go:297] selected driver: qemu2
	I0415 04:55:44.877970   13421 start.go:901] validating driver "qemu2" against &{Name:functional-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:functional-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:55:44.878024   13421 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 04:55:44.884144   13421 out.go:177] 
	W0415 04:55:44.888067   13421 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 04:55:44.892065   13421 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
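The French output is locale-driven: the harness runs the same failing dry-run with a French locale exported (something like LC_ALL=fr; the exact variable is the test's choice and is not shown in the log), and minikube's translations take over:

    # Assumed reproduction: force a French locale for one invocation.
    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-424000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2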

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.33s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.374175333s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-424000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image rm gcr.io/google-containers/addon-resizer:functional-424000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-424000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 image save --daemon gcr.io/google-containers/addon-resizer:functional-424000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-424000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "71.721083ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.737541ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "74.041708ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.5875ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013222583s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
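Note: the dscacheutil probe above can be approximated with Go's standard resolver. A minimal sketch, not part of the test suite; Go's resolver may bypass the macOS directory-services cache that dscacheutil exercises, so results can differ:

// lookup.go - hedged sketch: resolve the tunnel-exposed service name that the
// dscacheutil check above queries.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("DNS resolution failed:", err)
		return
	}
	fmt.Println("nginx-svc resolved to:", addrs)
}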

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-424000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-424000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-424000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-424000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.39s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-002000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-002000 --output=json --user=testUser: (3.39373525s)
--- PASS: TestJSONOutput/stop/Command (3.39s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-404000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-404000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.938459ms)
-- stdout --
	{"specversion":"1.0","id":"81eb84c7-0f80-421e-af18-458cb2ad4637","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-404000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"306bd4ec-590a-4dfd-8ec1-762874ea7acb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18644"}}
	{"specversion":"1.0","id":"e8333a7a-e7cb-48bb-a87b-18379929790d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig"}}
	{"specversion":"1.0","id":"505d4ff0-ee76-4810-af6b-9dba249f854b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f8d819c3-49ff-4fe7-800b-cb9eb136a27b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"42e603a7-faa1-4b4d-96bc-33a99979f608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube"}}
	{"specversion":"1.0","id":"a0d1a7a0-2e52-4ac5-924c-be171a0fd8c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"db879ebc-ca6a-43f2-997c-f7d639ec37b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-404000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-404000
--- PASS: TestErrorJSONOutput (0.33s)
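Note: each line that minikube emits under --output=json is a CloudEvents-style JSON object, as the stdout above shows. A minimal sketch of consuming such a stream and surfacing error events; the field names are taken from the events printed above, and this is illustrative, not minikube's own tooling:

// events.go - hedged sketch: scan `minikube start --output=json` output
// (piped to stdin) and report any io.k8s.sigs.minikube.error events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the stdout above; the "data"
// values are all strings in the events shown there.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Run against the stdout above, this would print the DRV_UNSUPPORTED_OS event with exit code 56.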

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-683000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-683000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (145.086709ms)
-- stdout --
	* [NoKubernetes-683000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-12307/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-12307/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)
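Note: the MK_USAGE failure above is a flag mutual-exclusion check. A hypothetical sketch of that kind of validation, not minikube's actual code; the flag names and exit status match what the test output shows:

// flags.go - hedged sketch: reject --kubernetes-version combined with
// --no-kubernetes, as the MK_USAGE error above does.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start minikube without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // matches the exit status the test asserts
	}
}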

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-683000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-683000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.822583ms)
-- stdout --
	* The control-plane node NoKubernetes-683000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-683000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (0.15s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (1.96s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-683000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-683000: (1.958186791s)
--- PASS: TestNoKubernetes/serial/Stop (1.96s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.06s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-683000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-683000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (60.538416ms)
-- stdout --
	* The control-plane node NoKubernetes-683000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-683000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.06s)

TestStoppedBinaryUpgrade/Setup (0.97s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.97s)

TestStartStop/group/old-k8s-version/serial/Stop (2.82s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-533000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-533000 --alsologtostderr -v=3: (2.81850575s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.82s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (51.401375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-533000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.27s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-680000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-680000 --alsologtostderr -v=3: (3.268355375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.27s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-680000 -n no-preload-680000: exit status 7 (55.318917ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-680000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-188000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-188000 --alsologtostderr -v=3: (2.059554417s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.06s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.75s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-413000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-188000 -n embed-certs-188000: exit status 7 (35.095625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-188000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-087000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-087000 --alsologtostderr -v=3: (3.864170959s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.86s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-087000 -n default-k8s-diff-port-087000: exit status 7 (56.300417ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-087000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-011000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.24s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-011000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-011000 --alsologtostderr -v=3: (3.236405833s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-011000 -n newest-cni-011000: exit status 7 (58.156166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-011000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.18s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1283351491/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713182111327848000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1283351491/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713182111327848000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1283351491/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713182111327848000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1283351491/001/test-1713182111327848000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.592083ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.357ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.99475ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.297ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.29775ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.037458ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.903ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo umount -f /mount-9p": exit status 83 (49.782583ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-424000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1283351491/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.18s)

TestFunctional/parallel/MountCmd/specific-port (11.06s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3247694020/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (65.879625ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.610792ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.706333ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.6005ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.6725ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.203958ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.409292ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "sudo umount -f /mount-9p": exit status 83 (47.618458ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-424000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3247694020/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.06s)

TestFunctional/parallel/MountCmd/VerifyCleanup (11.95s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424928296/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424928296/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424928296/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1: exit status 83 (92.79575ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1: exit status 83 (87.787166ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1: exit status 83 (89.863208ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1: exit status 83 (88.841917ms)
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1: exit status 83 (88.062166ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1: exit status 83 (86.86575ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-424000 ssh "findmnt -T" /mount1: exit status 83 (85.861917ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-424000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424928296/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424928296/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-424000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2424928296/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.95s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.54s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-033000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-033000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-033000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /etc/hosts:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /etc/resolv.conf:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-033000

>>> host: crictl pods:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: crictl containers:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> k8s: describe netcat deployment:
error: context "cilium-033000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-033000" does not exist

>>> k8s: netcat logs:
error: context "cilium-033000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-033000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-033000" does not exist

>>> k8s: coredns logs:
error: context "cilium-033000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-033000" does not exist

>>> k8s: api server logs:
error: context "cilium-033000" does not exist

>>> host: /etc/cni:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: ip a s:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: ip r s:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: iptables-save:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: iptables table nat:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-033000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-033000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-033000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-033000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-033000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-033000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-033000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-033000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-033000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-033000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-033000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: kubelet daemon config:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> k8s: kubelet logs:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-033000

>>> host: docker daemon status:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: docker daemon config:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: docker system info:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: cri-docker daemon status:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: cri-docker daemon config:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: cri-dockerd version:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: containerd daemon status:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: containerd daemon config:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: containerd config dump:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: crio daemon status:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: crio daemon config:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: /etc/crio:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"

>>> host: crio config:
* Profile "cilium-033000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-033000"
----------------------- debugLogs end: cilium-033000 [took: 2.287107958s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-033000
--- SKIP: TestNetworkPlugins/group/cilium (2.54s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-928000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-928000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)