Test Report: QEMU_macOS 18588

801f50a102c40cfdc9fc79f6fcbe1cefa0ef9ea3:2024-04-08:33935

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.53
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.95
36 TestAddons/Setup 10.26
37 TestCertOptions 10.07
38 TestCertExpiration 195.49
39 TestDockerFlags 10.45
40 TestForceSystemdFlag 10.38
41 TestForceSystemdEnv 10.39
47 TestErrorSpam/setup 9.77
56 TestFunctional/serial/StartWithProxy 9.94
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.69
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.94
72 TestFunctional/serial/ExtraConfig 5.27
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.09
75 TestFunctional/serial/LogsFileCmd 0.08
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.13
91 TestFunctional/parallel/CpCmd 0.3
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.09
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 111.6
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.59
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 23.76
150 TestMultiControlPlane/serial/StartCluster 10.1
151 TestMultiControlPlane/serial/DeployApp 109.64
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.11
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
159 TestMultiControlPlane/serial/RestartSecondaryNode 48.68
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.48
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
164 TestMultiControlPlane/serial/StopCluster 3.59
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.92
174 TestJSONOutput/start/Command 9.93
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.4
206 TestMountStart/serial/StartWithMountFirst 10.1
209 TestMultiNode/serial/FreshStart2Nodes 9.98
210 TestMultiNode/serial/DeployApp2Nodes 81.69
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.1
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 46.83
218 TestMultiNode/serial/RestartKeepsNodes 8.69
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.47
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.22
226 TestPreload 10.08
228 TestScheduledStopUnix 10.02
229 TestSkaffold 12.13
232 TestRunningBinaryUpgrade 602.47
234 TestKubernetesUpgrade 18.93
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.19
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.47
250 TestStoppedBinaryUpgrade/Upgrade 576.74
252 TestPause/serial/Start 9.95
262 TestNoKubernetes/serial/StartWithK8s 9.86
263 TestNoKubernetes/serial/StartWithStopK8s 5.29
264 TestNoKubernetes/serial/Start 5.28
268 TestNoKubernetes/serial/StartNoArgs 5.3
270 TestNetworkPlugins/group/auto/Start 9.69
271 TestNetworkPlugins/group/kindnet/Start 9.84
272 TestNetworkPlugins/group/calico/Start 9.92
273 TestNetworkPlugins/group/custom-flannel/Start 10.12
274 TestNetworkPlugins/group/false/Start 9.79
275 TestNetworkPlugins/group/enable-default-cni/Start 9.79
276 TestNetworkPlugins/group/flannel/Start 10.16
277 TestNetworkPlugins/group/bridge/Start 9.82
278 TestNetworkPlugins/group/kubenet/Start 9.87
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.89
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.3
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
290 TestStartStop/group/old-k8s-version/serial/Pause 0.11
292 TestStartStop/group/no-preload/serial/FirstStart 9.92
293 TestStartStop/group/no-preload/serial/DeployApp 0.1
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/embed-certs/serial/FirstStart 10.1
299 TestStartStop/group/no-preload/serial/SecondStart 6.65
300 TestStartStop/group/embed-certs/serial/DeployApp 0.1
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.09
305 TestStartStop/group/no-preload/serial/Pause 0.11
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.06
310 TestStartStop/group/embed-certs/serial/SecondStart 6.99
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
315 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
316 TestStartStop/group/embed-certs/serial/Pause 0.12
319 TestStartStop/group/newest-cni/serial/FirstStart 9.89
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.59
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (11.53s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-465000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-465000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.529995791s)

-- stdout --
	{"specversion":"1.0","id":"f876c597-0dfe-4e39-b0e8-adbd761a7658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-465000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5600c4b5-81ed-4fc1-bec2-47e0d62ac57e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18588"}}
	{"specversion":"1.0","id":"b5156472-f20c-4b82-a4d6-c6def29b338e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig"}}
	{"specversion":"1.0","id":"88f25e88-63b0-4881-bedc-672212d5037c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"252f8395-2f52-49ac-9aea-55a2671669c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e3e92072-1270-4c63-980f-82a5c46dc070","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube"}}
	{"specversion":"1.0","id":"2a3b5b3c-e9b6-4438-a73e-dd783db841f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"dcb5d6aa-0058-41aa-8d54-07bae9db823b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d544994-3092-4822-9f3d-9bee22ff2966","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"07bef58a-05e3-4562-859b-4f0ce0023a53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c599a7bc-5593-44ab-8163-fcf4cfda30a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-465000\" primary control-plane node in \"download-only-465000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"20632cf1-fd56-4686-9f8f-1f0c14a81c7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e7fdb6bd-120b-4f0e-9176-63c72835b638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240] Decompressors:map[bz2:0x140005b7630 gz:0x140005b7638 tar:0x140005b75e0 tar.bz2:0x140005b75f0 tar.gz:0x140005b7600 tar.xz:0x140005b7610 tar.zst:0x140005b7620 tbz2:0x140005b75f0 tgz:0x14
0005b7600 txz:0x140005b7610 tzst:0x140005b7620 xz:0x140005b7640 zip:0x140005b7650 zst:0x140005b7648] Getters:map[file:0x140022148c0 http:0x140005f65f0 https:0x140005f6640] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"ddf31b2c-5ae6-4b79-877a-da241ccf8575","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0408 04:26:13.464156    7751 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:26:13.464298    7751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:26:13.464301    7751 out.go:304] Setting ErrFile to fd 2...
	I0408 04:26:13.464304    7751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:26:13.464418    7751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	W0408 04:26:13.464511    7751 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18588-7343/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18588-7343/.minikube/config/config.json: no such file or directory
	I0408 04:26:13.465711    7751 out.go:298] Setting JSON to true
	I0408 04:26:13.485199    7751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5142,"bootTime":1712570431,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:26:13.485260    7751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:26:13.491061    7751 out.go:97] [download-only-465000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:26:13.496220    7751 out.go:169] MINIKUBE_LOCATION=18588
	I0408 04:26:13.491163    7751 notify.go:220] Checking for updates...
	W0408 04:26:13.491186    7751 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 04:26:13.505617    7751 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:26:13.510196    7751 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:26:13.513603    7751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:26:13.517434    7751 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	W0408 04:26:13.524829    7751 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 04:26:13.525025    7751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:26:13.529329    7751 out.go:97] Using the qemu2 driver based on user configuration
	I0408 04:26:13.529352    7751 start.go:297] selected driver: qemu2
	I0408 04:26:13.529368    7751 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:26:13.529470    7751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:26:13.533137    7751 out.go:169] Automatically selected the socket_vmnet network
	I0408 04:26:13.539393    7751 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0408 04:26:13.539492    7751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 04:26:13.539601    7751 cni.go:84] Creating CNI manager for ""
	I0408 04:26:13.539620    7751 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 04:26:13.539672    7751 start.go:340] cluster config:
	{Name:download-only-465000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:26:13.544563    7751 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:26:13.548116    7751 out.go:97] Downloading VM boot image ...
	I0408 04:26:13.548133    7751 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso
	I0408 04:26:17.864369    7751 out.go:97] Starting "download-only-465000" primary control-plane node in "download-only-465000" cluster
	I0408 04:26:17.864396    7751 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:26:17.920712    7751 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 04:26:17.920722    7751 cache.go:56] Caching tarball of preloaded images
	I0408 04:26:17.920939    7751 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:26:17.927803    7751 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 04:26:17.927810    7751 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:18.003040    7751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 04:26:23.528647    7751 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:23.528821    7751 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:24.226335    7751 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 04:26:24.226522    7751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/download-only-465000/config.json ...
	I0408 04:26:24.226551    7751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/download-only-465000/config.json: {Name:mk5cedd07cfbe42396ac5afb2a307579f9beedc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:26:24.226780    7751 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:26:24.226956    7751 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0408 04:26:24.910954    7751 out.go:169] 
	W0408 04:26:24.917026    7751 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240] Decompressors:map[bz2:0x140005b7630 gz:0x140005b7638 tar:0x140005b75e0 tar.bz2:0x140005b75f0 tar.gz:0x140005b7600 tar.xz:0x140005b7610 tar.zst:0x140005b7620 tbz2:0x140005b75f0 tgz:0x140005b7600 txz:0x140005b7610 tzst:0x140005b7620 xz:0x140005b7640 zip:0x140005b7650 zst:0x140005b7648] Getters:map[file:0x140022148c0 http:0x140005f65f0 https:0x140005f6640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0408 04:26:24.917058    7751 out_reason.go:110] 
	W0408 04:26:24.924915    7751 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:26:24.928833    7751 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-465000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.53s)
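
The INET_CACHE_KUBECTL error above is the root cause: the checksum file for the v1.20.0 darwin/arm64 kubectl binary comes back as HTTP 404, so minikube cannot cache kubectl and exits with status 40. A minimal reproduction sketch, assuming curl is available on the agent (the URLs are copied verbatim from the log):

	# Probe the checksum file minikube requests; a 404 status line here
	# confirms the artifact is missing upstream rather than a harness bug.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1

	# The binary itself can be probed the same way.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl | head -n 1

If both probes return 404, darwin/arm64 builds were most likely never published for v1.20.0, which would make this an upstream availability gap rather than a regression introduced by this commit.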

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-018000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-018000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.764044291s)

-- stdout --
	* [offline-docker-018000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-018000" primary control-plane node in "offline-docker-018000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-018000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:38:02.928373    9351 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:38:02.928521    9351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:38:02.928529    9351 out.go:304] Setting ErrFile to fd 2...
	I0408 04:38:02.928531    9351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:38:02.928684    9351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:38:02.929806    9351 out.go:298] Setting JSON to false
	I0408 04:38:02.947647    9351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5851,"bootTime":1712570431,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:38:02.947720    9351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:38:02.951961    9351 out.go:177] * [offline-docker-018000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:38:02.959850    9351 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:38:02.959880    9351 notify.go:220] Checking for updates...
	I0408 04:38:02.966912    9351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:38:02.969847    9351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:38:02.972890    9351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:38:02.975861    9351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:38:02.978786    9351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:38:02.982183    9351 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:38:02.982251    9351 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:38:02.985905    9351 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:38:02.992873    9351 start.go:297] selected driver: qemu2
	I0408 04:38:02.992884    9351 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:38:02.992892    9351 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:38:02.995038    9351 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:38:02.997870    9351 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:38:02.999206    9351 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:38:02.999241    9351 cni.go:84] Creating CNI manager for ""
	I0408 04:38:02.999261    9351 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:38:02.999267    9351 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:38:02.999304    9351 start.go:340] cluster config:
	{Name:offline-docker-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:38:03.003572    9351 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:38:03.010911    9351 out.go:177] * Starting "offline-docker-018000" primary control-plane node in "offline-docker-018000" cluster
	I0408 04:38:03.014812    9351 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:38:03.014845    9351 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:38:03.014853    9351 cache.go:56] Caching tarball of preloaded images
	I0408 04:38:03.014937    9351 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:38:03.014943    9351 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:38:03.015010    9351 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/offline-docker-018000/config.json ...
	I0408 04:38:03.015022    9351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/offline-docker-018000/config.json: {Name:mkcbabae17e4aaf7f787d7cf1fe4b49bbf743a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:38:03.015272    9351 start.go:360] acquireMachinesLock for offline-docker-018000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:38:03.015301    9351 start.go:364] duration metric: took 23.334µs to acquireMachinesLock for "offline-docker-018000"
	I0408 04:38:03.015312    9351 start.go:93] Provisioning new machine with config: &{Name:offline-docker-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:38:03.015344    9351 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:38:03.022857    9351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 04:38:03.038189    9351 start.go:159] libmachine.API.Create for "offline-docker-018000" (driver="qemu2")
	I0408 04:38:03.038230    9351 client.go:168] LocalClient.Create starting
	I0408 04:38:03.038309    9351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:38:03.038343    9351 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:03.038354    9351 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:03.038401    9351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:38:03.038422    9351 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:03.038430    9351 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:03.038811    9351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:38:03.187133    9351 main.go:141] libmachine: Creating SSH key...
	I0408 04:38:03.250375    9351 main.go:141] libmachine: Creating Disk image...
	I0408 04:38:03.250386    9351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:38:03.250609    9351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2
	I0408 04:38:03.276380    9351 main.go:141] libmachine: STDOUT: 
	I0408 04:38:03.276402    9351 main.go:141] libmachine: STDERR: 
	I0408 04:38:03.276463    9351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2 +20000M
	I0408 04:38:03.288393    9351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:38:03.288418    9351 main.go:141] libmachine: STDERR: 
	I0408 04:38:03.288439    9351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2
	I0408 04:38:03.288445    9351 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:38:03.288477    9351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:e7:c3:8a:de:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2
	I0408 04:38:03.290408    9351 main.go:141] libmachine: STDOUT: 
	I0408 04:38:03.290427    9351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:38:03.290443    9351 client.go:171] duration metric: took 252.208542ms to LocalClient.Create
	I0408 04:38:05.290728    9351 start.go:128] duration metric: took 2.275409542s to createHost
	I0408 04:38:05.290748    9351 start.go:83] releasing machines lock for "offline-docker-018000", held for 2.275474917s
	W0408 04:38:05.290762    9351 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:05.299450    9351 out.go:177] * Deleting "offline-docker-018000" in qemu2 ...
	W0408 04:38:05.309471    9351 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:05.309480    9351 start.go:728] Will try again in 5 seconds ...
	I0408 04:38:10.311584    9351 start.go:360] acquireMachinesLock for offline-docker-018000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:38:10.311740    9351 start.go:364] duration metric: took 121.75µs to acquireMachinesLock for "offline-docker-018000"
	I0408 04:38:10.311786    9351 start.go:93] Provisioning new machine with config: &{Name:offline-docker-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:38:10.311843    9351 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:38:10.321885    9351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 04:38:10.346896    9351 start.go:159] libmachine.API.Create for "offline-docker-018000" (driver="qemu2")
	I0408 04:38:10.346932    9351 client.go:168] LocalClient.Create starting
	I0408 04:38:10.347004    9351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:38:10.347039    9351 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:10.347050    9351 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:10.347099    9351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:38:10.347128    9351 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:10.347141    9351 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:10.347499    9351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:38:10.498201    9351 main.go:141] libmachine: Creating SSH key...
	I0408 04:38:10.587736    9351 main.go:141] libmachine: Creating Disk image...
	I0408 04:38:10.587741    9351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:38:10.587914    9351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2
	I0408 04:38:10.600286    9351 main.go:141] libmachine: STDOUT: 
	I0408 04:38:10.600306    9351 main.go:141] libmachine: STDERR: 
	I0408 04:38:10.600369    9351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2 +20000M
	I0408 04:38:10.611087    9351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:38:10.611106    9351 main.go:141] libmachine: STDERR: 
	I0408 04:38:10.611124    9351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2
	I0408 04:38:10.611128    9351 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:38:10.611164    9351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:76:8d:e7:83:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/offline-docker-018000/disk.qcow2
	I0408 04:38:10.612830    9351 main.go:141] libmachine: STDOUT: 
	I0408 04:38:10.612847    9351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:38:10.612860    9351 client.go:171] duration metric: took 265.925834ms to LocalClient.Create
	I0408 04:38:12.615024    9351 start.go:128] duration metric: took 2.303188209s to createHost
	I0408 04:38:12.615103    9351 start.go:83] releasing machines lock for "offline-docker-018000", held for 2.303379209s
	W0408 04:38:12.615542    9351 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:12.628800    9351 out.go:177] 
	W0408 04:38:12.632961    9351 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:38:12.633071    9351 out.go:239] * 
	* 
	W0408 04:38:12.635577    9351 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:38:12.645784    9351 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-018000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-04-08 04:38:12.661945 -0700 PDT m=+719.287938417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-018000 -n offline-docker-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-018000 -n offline-docker-018000: exit status 7 (72.355875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-018000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-018000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-018000
--- FAIL: TestOffline (9.95s)
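
This failure, and nearly every other Start-flavored failure in this run, carries the same signature: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver dials the socket_vmnet daemon to attach the VM's network, and nothing was listening on the agent. A short triage sketch, assuming the Homebrew-based socket_vmnet setup described in the minikube qemu driver docs (paths and service names may differ on this agent):

	# Does the socket exist, and is a socket_vmnet process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Under Homebrew the daemon normally runs as a root service; restarting
	# it is the usual fix (hypothetical for this agent, verify before running):
	#   sudo brew services restart socket_vmnet

See https://minikube.sigs.k8s.io/docs/drivers/qemu/ for the socket_vmnet setup the driver expects.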

TestAddons/Setup (10.26s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-580000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-580000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.256529666s)

-- stdout --
	* [addons-580000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-580000" primary control-plane node in "addons-580000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-580000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:27:04.643952    7911 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:27:04.644103    7911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:27:04.644106    7911 out.go:304] Setting ErrFile to fd 2...
	I0408 04:27:04.644108    7911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:27:04.644239    7911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:27:04.645319    7911 out.go:298] Setting JSON to false
	I0408 04:27:04.661444    7911 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5193,"bootTime":1712570431,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:27:04.661507    7911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:27:04.666497    7911 out.go:177] * [addons-580000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:27:04.673539    7911 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:27:04.673586    7911 notify.go:220] Checking for updates...
	I0408 04:27:04.680423    7911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:27:04.683536    7911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:27:04.686441    7911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:27:04.689919    7911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:27:04.693447    7911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:27:04.696640    7911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:27:04.701465    7911 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:27:04.709486    7911 start.go:297] selected driver: qemu2
	I0408 04:27:04.709492    7911 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:27:04.709498    7911 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:27:04.712051    7911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:27:04.716484    7911 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:27:04.719543    7911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:27:04.719605    7911 cni.go:84] Creating CNI manager for ""
	I0408 04:27:04.719616    7911 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:27:04.719621    7911 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:27:04.719661    7911 start.go:340] cluster config:
	{Name:addons-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client
SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:27:04.724321    7911 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:27:04.731442    7911 out.go:177] * Starting "addons-580000" primary control-plane node in "addons-580000" cluster
	I0408 04:27:04.735427    7911 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:27:04.735441    7911 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:27:04.735449    7911 cache.go:56] Caching tarball of preloaded images
	I0408 04:27:04.735510    7911 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:27:04.735516    7911 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:27:04.735743    7911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/addons-580000/config.json ...
	I0408 04:27:04.735756    7911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/addons-580000/config.json: {Name:mkc1205d5a9e6015ac7eedb9f14190bbf36cc414 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:27:04.736042    7911 start.go:360] acquireMachinesLock for addons-580000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:27:04.736403    7911 start.go:364] duration metric: took 354.708µs to acquireMachinesLock for "addons-580000"
	I0408 04:27:04.736415    7911 start.go:93] Provisioning new machine with config: &{Name:addons-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:addons-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:27:04.736455    7911 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:27:04.745545    7911 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 04:27:04.764825    7911 start.go:159] libmachine.API.Create for "addons-580000" (driver="qemu2")
	I0408 04:27:04.764854    7911 client.go:168] LocalClient.Create starting
	I0408 04:27:04.765007    7911 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:27:04.809920    7911 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:27:04.942787    7911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:27:05.186597    7911 main.go:141] libmachine: Creating SSH key...
	I0408 04:27:05.305989    7911 main.go:141] libmachine: Creating Disk image...
	I0408 04:27:05.305997    7911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:27:05.306189    7911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2
	I0408 04:27:05.318625    7911 main.go:141] libmachine: STDOUT: 
	I0408 04:27:05.318647    7911 main.go:141] libmachine: STDERR: 
	I0408 04:27:05.318710    7911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2 +20000M
	I0408 04:27:05.329374    7911 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:27:05.329391    7911 main.go:141] libmachine: STDERR: 
	I0408 04:27:05.329407    7911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2
	I0408 04:27:05.329411    7911 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:27:05.329449    7911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5a:c7:6b:23:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2
	I0408 04:27:05.331193    7911 main.go:141] libmachine: STDOUT: 
	I0408 04:27:05.331209    7911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:27:05.331228    7911 client.go:171] duration metric: took 566.3725ms to LocalClient.Create
	I0408 04:27:07.333378    7911 start.go:128] duration metric: took 2.596937791s to createHost
	I0408 04:27:07.333442    7911 start.go:83] releasing machines lock for "addons-580000", held for 2.597065834s
	W0408 04:27:07.333551    7911 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:27:07.348904    7911 out.go:177] * Deleting "addons-580000" in qemu2 ...
	W0408 04:27:07.375516    7911 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:27:07.375539    7911 start.go:728] Will try again in 5 seconds ...
	I0408 04:27:12.377649    7911 start.go:360] acquireMachinesLock for addons-580000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:27:12.377912    7911 start.go:364] duration metric: took 153.709µs to acquireMachinesLock for "addons-580000"
	I0408 04:27:12.377972    7911 start.go:93] Provisioning new machine with config: &{Name:addons-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:addons-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:27:12.378161    7911 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:27:12.390169    7911 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 04:27:12.432352    7911 start.go:159] libmachine.API.Create for "addons-580000" (driver="qemu2")
	I0408 04:27:12.432411    7911 client.go:168] LocalClient.Create starting
	I0408 04:27:12.432530    7911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:27:12.432589    7911 main.go:141] libmachine: Decoding PEM data...
	I0408 04:27:12.432621    7911 main.go:141] libmachine: Parsing certificate...
	I0408 04:27:12.432712    7911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:27:12.432770    7911 main.go:141] libmachine: Decoding PEM data...
	I0408 04:27:12.432782    7911 main.go:141] libmachine: Parsing certificate...
	I0408 04:27:12.433341    7911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:27:12.591032    7911 main.go:141] libmachine: Creating SSH key...
	I0408 04:27:12.795032    7911 main.go:141] libmachine: Creating Disk image...
	I0408 04:27:12.795039    7911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:27:12.795240    7911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2
	I0408 04:27:12.808077    7911 main.go:141] libmachine: STDOUT: 
	I0408 04:27:12.808102    7911 main.go:141] libmachine: STDERR: 
	I0408 04:27:12.808174    7911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2 +20000M
	I0408 04:27:12.819316    7911 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:27:12.819367    7911 main.go:141] libmachine: STDERR: 
	I0408 04:27:12.819380    7911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2
	I0408 04:27:12.819385    7911 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:27:12.819417    7911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:a1:93:fe:fc:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/addons-580000/disk.qcow2
	I0408 04:27:12.821182    7911 main.go:141] libmachine: STDOUT: 
	I0408 04:27:12.821221    7911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:27:12.821240    7911 client.go:171] duration metric: took 388.827625ms to LocalClient.Create
	I0408 04:27:14.823463    7911 start.go:128] duration metric: took 2.4452955s to createHost
	I0408 04:27:14.823538    7911 start.go:83] releasing machines lock for "addons-580000", held for 2.445644917s
	W0408 04:27:14.823850    7911 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:27:14.833225    7911 out.go:177] 
	W0408 04:27:14.841301    7911 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:27:14.841365    7911 out.go:239] * 
	* 
	W0408 04:27:14.844035    7911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:27:14.854248    7911 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-580000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.26s)
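
The verbose trace above shows exactly where startup dies: libmachine launches qemu-system-aarch64 wrapped in socket_vmnet_client, which has to connect to the unix socket before QEMU ever runs; the guest NIC is then wired to the inherited descriptor via -netdev socket,id=net0,fd=3. The failing step can be reproduced in isolation (a minimal sketch reusing the paths from the log, with the QEMU flags trimmed to the essentials):

	# socket_vmnet_client connects to the socket, then execs the wrapped
	# command, passing the connected fd through as fd 3.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt -cpu host -accel hvf -display none \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3

A "Connection refused" here, with no QEMU output at all, matches the trace and places the fault in the helper daemon rather than in minikube or QEMU.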

TestCertOptions (10.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-519000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-519000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.776865084s)

-- stdout --
	* [cert-options-519000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-519000" primary control-plane node in "cert-options-519000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-519000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-519000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-519000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-519000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-519000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.035583ms)

-- stdout --
	* The control-plane node cert-options-519000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-519000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-519000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-519000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-519000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-519000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (44.346417ms)

-- stdout --
	* The control-plane node cert-options-519000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-519000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-519000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-519000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-519000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-04-08 04:38:43.618862 -0700 PDT m=+750.245289917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-519000 -n cert-options-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-519000 -n cert-options-519000: exit status 7 (32.39825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-519000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-519000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-519000
--- FAIL: TestCertOptions (10.07s)
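
The SAN assertions at cert_options_test.go:69 are downstream casualties: the ssh step exits 83 because the host never booted, so the test never sees a certificate at all. On a healthy host the check reduces to the following (sketch, using this run's profile name; the expected entries come straight from the --apiserver-ips and --apiserver-names flags above):

	out/minikube-darwin-arm64 -p cert-options-519000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# Expected to include IP Address:127.0.0.1, IP Address:192.168.15.15,
	# DNS:localhost and DNS:www.google.com alongside minikube's default SANs.

The empty `kubectl config view` result (clusters: null) follows from the same failure: no cluster entry was ever written for the profile.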

TestCertExpiration (195.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-040000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-040000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.130860292s)

-- stdout --
	* [cert-expiration-040000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-040000" primary control-plane node in "cert-expiration-040000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-040000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-040000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-040000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-040000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.222572667s)

-- stdout --
	* [cert-expiration-040000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-040000" primary control-plane node in "cert-expiration-040000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-040000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-040000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-040000" primary control-plane node in "cert-expiration-040000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-04-08 04:41:43.798936 -0700 PDT m=+930.427893626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-040000 -n cert-expiration-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-040000 -n cert-expiration-040000: exit status 7 (32.329333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-040000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-040000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-040000
--- FAIL: TestCertExpiration (195.49s)
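
The ~195 s wall time is the test design, not a hang: TestCertExpiration starts the cluster with --cert-expiration=3m, waits out the three minutes, then restarts with --cert-expiration=8760h and expects the second start to warn about expired certificates (the assertion at cert_options_test.go:136). Both starts die at the socket_vmnet connect, so the warning is never emitted. On a working host, the expiry being manipulated can be read directly from the guest (sketch; same in-guest certificate path as the cert-options check):

	out/minikube-darwin-arm64 -p cert-expiration-040000 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# notAfter should land roughly 3 minutes after the first start.

Note the second attempt reports "Restarting existing qemu2 VM" rather than creating one: the first run left a stopped profile behind, so minikube reuses the existing, never-booted machine config.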

TestDockerFlags (10.45s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-886000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-886000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.188717s)

-- stdout --
	* [docker-flags-886000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-886000" primary control-plane node in "docker-flags-886000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-886000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:38:23.260498    9553 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:38:23.260635    9553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:38:23.260638    9553 out.go:304] Setting ErrFile to fd 2...
	I0408 04:38:23.260641    9553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:38:23.260763    9553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:38:23.261784    9553 out.go:298] Setting JSON to false
	I0408 04:38:23.278110    9553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5872,"bootTime":1712570431,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:38:23.278170    9553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:38:23.283082    9553 out.go:177] * [docker-flags-886000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:38:23.289868    9553 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:38:23.289919    9553 notify.go:220] Checking for updates...
	I0408 04:38:23.294917    9553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:38:23.296424    9553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:38:23.299879    9553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:38:23.302907    9553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:38:23.305909    9553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:38:23.309284    9553 config.go:182] Loaded profile config "force-systemd-flag-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:38:23.309353    9553 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:38:23.309401    9553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:38:23.313895    9553 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:38:23.320879    9553 start.go:297] selected driver: qemu2
	I0408 04:38:23.320885    9553 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:38:23.320891    9553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:38:23.323152    9553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:38:23.325909    9553 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:38:23.329028    9553 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0408 04:38:23.329083    9553 cni.go:84] Creating CNI manager for ""
	I0408 04:38:23.329089    9553 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:38:23.329097    9553 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:38:23.329127    9553 start.go:340] cluster config:
	{Name:docker-flags-886000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:38:23.333773    9553 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:38:23.340873    9553 out.go:177] * Starting "docker-flags-886000" primary control-plane node in "docker-flags-886000" cluster
	I0408 04:38:23.343860    9553 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:38:23.343882    9553 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:38:23.343892    9553 cache.go:56] Caching tarball of preloaded images
	I0408 04:38:23.343982    9553 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:38:23.343987    9553 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:38:23.344039    9553 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/docker-flags-886000/config.json ...
	I0408 04:38:23.344055    9553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/docker-flags-886000/config.json: {Name:mk2a6a7cc30aaff6b5b668e1d4c2523613d5c66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:38:23.344285    9553 start.go:360] acquireMachinesLock for docker-flags-886000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:38:23.344317    9553 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "docker-flags-886000"
	I0408 04:38:23.344328    9553 start.go:93] Provisioning new machine with config: &{Name:docker-flags-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey:
SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:38:23.344367    9553 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:38:23.351695    9553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 04:38:23.369032    9553 start.go:159] libmachine.API.Create for "docker-flags-886000" (driver="qemu2")
	I0408 04:38:23.369060    9553 client.go:168] LocalClient.Create starting
	I0408 04:38:23.369132    9553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:38:23.369160    9553 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:23.369169    9553 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:23.369205    9553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:38:23.369227    9553 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:23.369236    9553 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:23.369611    9553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:38:23.516742    9553 main.go:141] libmachine: Creating SSH key...
	I0408 04:38:23.563791    9553 main.go:141] libmachine: Creating Disk image...
	I0408 04:38:23.563796    9553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:38:23.563960    9553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2
	I0408 04:38:23.576539    9553 main.go:141] libmachine: STDOUT: 
	I0408 04:38:23.576560    9553 main.go:141] libmachine: STDERR: 
	I0408 04:38:23.576618    9553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2 +20000M
	I0408 04:38:23.587260    9553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:38:23.587277    9553 main.go:141] libmachine: STDERR: 
	I0408 04:38:23.587296    9553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2
	I0408 04:38:23.587302    9553 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:38:23.587339    9553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:3e:81:ee:b5:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2
	I0408 04:38:23.589112    9553 main.go:141] libmachine: STDOUT: 
	I0408 04:38:23.589130    9553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:38:23.589158    9553 client.go:171] duration metric: took 220.094583ms to LocalClient.Create
	I0408 04:38:25.591305    9553 start.go:128] duration metric: took 2.24695225s to createHost
	I0408 04:38:25.591358    9553 start.go:83] releasing machines lock for "docker-flags-886000", held for 2.247062584s
	W0408 04:38:25.591416    9553 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:25.605564    9553 out.go:177] * Deleting "docker-flags-886000" in qemu2 ...
	W0408 04:38:25.640280    9553 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:25.640373    9553 start.go:728] Will try again in 5 seconds ...
	I0408 04:38:30.642543    9553 start.go:360] acquireMachinesLock for docker-flags-886000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:38:30.828222    9553 start.go:364] duration metric: took 185.47925ms to acquireMachinesLock for "docker-flags-886000"
	I0408 04:38:30.828312    9553 start.go:93] Provisioning new machine with config: &{Name:docker-flags-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:38:30.828589    9553 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:38:30.837197    9553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 04:38:30.881996    9553 start.go:159] libmachine.API.Create for "docker-flags-886000" (driver="qemu2")
	I0408 04:38:30.882044    9553 client.go:168] LocalClient.Create starting
	I0408 04:38:30.882168    9553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:38:30.882229    9553 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:30.882250    9553 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:30.882310    9553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:38:30.882354    9553 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:30.882368    9553 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:30.882913    9553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:38:31.042421    9553 main.go:141] libmachine: Creating SSH key...
	I0408 04:38:31.336371    9553 main.go:141] libmachine: Creating Disk image...
	I0408 04:38:31.336382    9553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:38:31.336620    9553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2
	I0408 04:38:31.349666    9553 main.go:141] libmachine: STDOUT: 
	I0408 04:38:31.349689    9553 main.go:141] libmachine: STDERR: 
	I0408 04:38:31.349752    9553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2 +20000M
	I0408 04:38:31.360440    9553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:38:31.360456    9553 main.go:141] libmachine: STDERR: 
	I0408 04:38:31.360470    9553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2
	I0408 04:38:31.360475    9553 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:38:31.360513    9553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:30:9e:47:a2:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/docker-flags-886000/disk.qcow2
	I0408 04:38:31.362225    9553 main.go:141] libmachine: STDOUT: 
	I0408 04:38:31.362242    9553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:38:31.362255    9553 client.go:171] duration metric: took 480.213291ms to LocalClient.Create
	I0408 04:38:33.364343    9553 start.go:128] duration metric: took 2.535758916s to createHost
	I0408 04:38:33.364401    9553 start.go:83] releasing machines lock for "docker-flags-886000", held for 2.536175416s
	W0408 04:38:33.364812    9553 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-886000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-886000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:33.380602    9553 out.go:177] 
	W0408 04:38:33.389576    9553 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:38:33.389628    9553 out.go:239] * 
	* 
	W0408 04:38:33.392219    9553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:38:33.402271    9553 out.go:177] 
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-886000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-886000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-886000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (81.310833ms)
-- stdout --
	* The control-plane node docker-flags-886000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-886000"
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-886000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-886000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-886000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-886000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-886000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-886000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-886000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.840416ms)
-- stdout --
	* The control-plane node docker-flags-886000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-886000"
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-886000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-886000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-886000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-886000\"\n"
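For reference, those two probes are how docker_test.go verifies the injected settings; on a node that actually booted, their output would contain the values under test. A sketch of healthy output (illustrative only; the exact ExecStart formatting depends on the systemd version in the guest image):

    $ out/minikube-darwin-arm64 -p docker-flags-886000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    Environment=FOO=BAR BAZ=BAT
    $ out/minikube-darwin-arm64 -p docker-flags-886000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true ... }

Here the assertions never get that far: both probes exit 83 because the profile's host is Stopped.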
panic.go:626: *** TestDockerFlags FAILED at 2024-04-08 04:38:33.548487 -0700 PDT m=+740.174773292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-886000 -n docker-flags-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-886000 -n docker-flags-886000: exit status 7 (30.826875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-886000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-886000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-886000
--- FAIL: TestDockerFlags (10.45s)
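The root cause here (and in TestForceSystemdFlag and TestForceSystemdEnv below) is identical: socket_vmnet_client cannot reach the socket_vmnet daemon's UNIX socket at /var/run/socket_vmnet, so QEMU is never launched and both create attempts (the initial one and the 5-second retry) fail, leaving the profile Stopped. A minimal triage sketch for the CI host, assuming the stock socket_vmnet install paths seen in the log; the launchd label io.github.lima-vm.socket_vmnet is the project's default and is an assumption about this host's setup:

    # Is the daemon's UNIX socket present? A missing socket means the daemon is not running.
    ls -l /var/run/socket_vmnet

    # If socket_vmnet runs as a launchd daemon (the stock install), inspect and restart it.
    sudo launchctl print system/io.github.lima-vm.socket_vmnet
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet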
TestForceSystemdFlag (10.38s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-431000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-431000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.150236125s)
-- stdout --
	* [force-systemd-flag-431000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-431000" primary control-plane node in "force-systemd-flag-431000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-431000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0408 04:38:18.095687    9527 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:38:18.095820    9527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:38:18.095823    9527 out.go:304] Setting ErrFile to fd 2...
	I0408 04:38:18.095826    9527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:38:18.095957    9527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:38:18.096991    9527 out.go:298] Setting JSON to false
	I0408 04:38:18.113042    9527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5867,"bootTime":1712570431,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:38:18.113095    9527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:38:18.119949    9527 out.go:177] * [force-systemd-flag-431000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:38:18.126946    9527 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:38:18.132028    9527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:38:18.126992    9527 notify.go:220] Checking for updates...
	I0408 04:38:18.137891    9527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:38:18.141967    9527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:38:18.144921    9527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:38:18.147915    9527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:38:18.151264    9527 config.go:182] Loaded profile config "force-systemd-env-907000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:38:18.151336    9527 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:38:18.151379    9527 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:38:18.155892    9527 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:38:18.162906    9527 start.go:297] selected driver: qemu2
	I0408 04:38:18.162911    9527 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:38:18.162917    9527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:38:18.165327    9527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:38:18.167782    9527 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:38:18.171042    9527 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 04:38:18.171090    9527 cni.go:84] Creating CNI manager for ""
	I0408 04:38:18.171097    9527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:38:18.171102    9527 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:38:18.171152    9527 start.go:340] cluster config:
	{Name:force-systemd-flag-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:38:18.175754    9527 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:38:18.182863    9527 out.go:177] * Starting "force-systemd-flag-431000" primary control-plane node in "force-systemd-flag-431000" cluster
	I0408 04:38:18.186901    9527 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:38:18.186916    9527 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:38:18.186924    9527 cache.go:56] Caching tarball of preloaded images
	I0408 04:38:18.186992    9527 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:38:18.186997    9527 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:38:18.187059    9527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/force-systemd-flag-431000/config.json ...
	I0408 04:38:18.187071    9527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/force-systemd-flag-431000/config.json: {Name:mkd62d0dafb1bd0660b48495e4cd88731f1d886e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:38:18.187315    9527 start.go:360] acquireMachinesLock for force-systemd-flag-431000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:38:18.187349    9527 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "force-systemd-flag-431000"
	I0408 04:38:18.187361    9527 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:38:18.187388    9527 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:38:18.194885    9527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 04:38:18.212778    9527 start.go:159] libmachine.API.Create for "force-systemd-flag-431000" (driver="qemu2")
	I0408 04:38:18.212801    9527 client.go:168] LocalClient.Create starting
	I0408 04:38:18.212855    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:38:18.212887    9527 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:18.212900    9527 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:18.212941    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:38:18.212964    9527 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:18.212977    9527 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:18.213345    9527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:38:18.361182    9527 main.go:141] libmachine: Creating SSH key...
	I0408 04:38:18.520694    9527 main.go:141] libmachine: Creating Disk image...
	I0408 04:38:18.520701    9527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:38:18.520880    9527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2
	I0408 04:38:18.533735    9527 main.go:141] libmachine: STDOUT: 
	I0408 04:38:18.533763    9527 main.go:141] libmachine: STDERR: 
	I0408 04:38:18.533811    9527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2 +20000M
	I0408 04:38:18.544382    9527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:38:18.544407    9527 main.go:141] libmachine: STDERR: 
	I0408 04:38:18.544419    9527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2
	I0408 04:38:18.544424    9527 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:38:18.544449    9527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:12:32:5e:0e:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2
	I0408 04:38:18.546170    9527 main.go:141] libmachine: STDOUT: 
	I0408 04:38:18.546186    9527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:38:18.546205    9527 client.go:171] duration metric: took 333.402ms to LocalClient.Create
	I0408 04:38:20.548333    9527 start.go:128] duration metric: took 2.360963792s to createHost
	I0408 04:38:20.548379    9527 start.go:83] releasing machines lock for "force-systemd-flag-431000", held for 2.36104875s
	W0408 04:38:20.548445    9527 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:20.570572    9527 out.go:177] * Deleting "force-systemd-flag-431000" in qemu2 ...
	W0408 04:38:20.592327    9527 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:20.592350    9527 start.go:728] Will try again in 5 seconds ...
	I0408 04:38:25.594572    9527 start.go:360] acquireMachinesLock for force-systemd-flag-431000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:38:25.594946    9527 start.go:364] duration metric: took 281.625µs to acquireMachinesLock for "force-systemd-flag-431000"
	I0408 04:38:25.595004    9527 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:38:25.595261    9527 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:38:25.614527    9527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 04:38:25.663018    9527 start.go:159] libmachine.API.Create for "force-systemd-flag-431000" (driver="qemu2")
	I0408 04:38:25.663066    9527 client.go:168] LocalClient.Create starting
	I0408 04:38:25.663221    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:38:25.663287    9527 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:25.663313    9527 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:25.663387    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:38:25.663429    9527 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:25.663445    9527 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:25.664113    9527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:38:25.819973    9527 main.go:141] libmachine: Creating SSH key...
	I0408 04:38:26.135780    9527 main.go:141] libmachine: Creating Disk image...
	I0408 04:38:26.135793    9527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:38:26.136010    9527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2
	I0408 04:38:26.149061    9527 main.go:141] libmachine: STDOUT: 
	I0408 04:38:26.149085    9527 main.go:141] libmachine: STDERR: 
	I0408 04:38:26.149143    9527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2 +20000M
	I0408 04:38:26.160009    9527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:38:26.160024    9527 main.go:141] libmachine: STDERR: 
	I0408 04:38:26.160038    9527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2
	I0408 04:38:26.160048    9527 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:38:26.160091    9527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:9d:ab:88:c7:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-flag-431000/disk.qcow2
	I0408 04:38:26.161830    9527 main.go:141] libmachine: STDOUT: 
	I0408 04:38:26.161846    9527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:38:26.161859    9527 client.go:171] duration metric: took 498.79275ms to LocalClient.Create
	I0408 04:38:28.164131    9527 start.go:128] duration metric: took 2.568830875s to createHost
	I0408 04:38:28.164245    9527 start.go:83] releasing machines lock for "force-systemd-flag-431000", held for 2.569312792s
	W0408 04:38:28.164599    9527 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:28.179104    9527 out.go:177] 
	W0408 04:38:28.188121    9527 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:38:28.188143    9527 out.go:239] * 
	* 
	W0408 04:38:28.189933    9527 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:38:28.201073    9527 out.go:177] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-431000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-431000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-431000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (84.096292ms)
-- stdout --
	* The control-plane node force-systemd-flag-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-431000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-431000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-08 04:38:28.30425 -0700 PDT m=+734.930463501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-431000 -n force-systemd-flag-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-431000 -n force-systemd-flag-431000: exit status 7 (35.062666ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-431000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-431000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-431000
--- FAIL: TestForceSystemdFlag (10.38s)
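The same failure reproduces in isolation, without minikube: socket_vmnet_client connects to the daemon's UNIX socket, receives the vmnet file descriptor, and execs the given command with it (the qemu-system-aarch64 invocation in the logs above, hence -netdev socket,...,fd=3). With no daemon listening it fails immediately with the exact error minikube records. A sketch, using /usr/bin/true as a stand-in for the QEMU command line:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
    # Expected while the daemon is down:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused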
TestForceSystemdEnv (10.39s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-907000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-907000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.161228834s)
-- stdout --
	* [force-systemd-env-907000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-907000" primary control-plane node in "force-systemd-env-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0408 04:38:12.875402    9492 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:38:12.875504    9492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:38:12.875507    9492 out.go:304] Setting ErrFile to fd 2...
	I0408 04:38:12.875509    9492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:38:12.875630    9492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:38:12.876679    9492 out.go:298] Setting JSON to false
	I0408 04:38:12.893602    9492 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5861,"bootTime":1712570431,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:38:12.893675    9492 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:38:12.898167    9492 out.go:177] * [force-systemd-env-907000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:38:12.910996    9492 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:38:12.906086    9492 notify.go:220] Checking for updates...
	I0408 04:38:12.917968    9492 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:38:12.926042    9492 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:38:12.933980    9492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:38:12.941932    9492 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:38:12.950840    9492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0408 04:38:12.956374    9492 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:38:12.956425    9492 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:38:12.960108    9492 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:38:12.968044    9492 start.go:297] selected driver: qemu2
	I0408 04:38:12.968051    9492 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:38:12.968056    9492 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:38:12.970393    9492 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:38:12.974025    9492 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:38:12.977105    9492 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 04:38:12.977141    9492 cni.go:84] Creating CNI manager for ""
	I0408 04:38:12.977148    9492 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:38:12.977158    9492 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:38:12.977186    9492 start.go:340] cluster config:
	{Name:force-systemd-env-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:38:12.981613    9492 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:38:12.989024    9492 out.go:177] * Starting "force-systemd-env-907000" primary control-plane node in "force-systemd-env-907000" cluster
	I0408 04:38:12.993061    9492 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:38:12.993074    9492 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:38:12.993084    9492 cache.go:56] Caching tarball of preloaded images
	I0408 04:38:12.993138    9492 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:38:12.993144    9492 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:38:12.993203    9492 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/force-systemd-env-907000/config.json ...
	I0408 04:38:12.993214    9492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/force-systemd-env-907000/config.json: {Name:mk52f74ff5db1304cdad290ce4b398724e94f433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:38:12.993650    9492 start.go:360] acquireMachinesLock for force-systemd-env-907000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:38:12.993683    9492 start.go:364] duration metric: took 25.291µs to acquireMachinesLock for "force-systemd-env-907000"
	I0408 04:38:12.993694    9492 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:38:12.993719    9492 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:38:13.001954    9492 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 04:38:13.018715    9492 start.go:159] libmachine.API.Create for "force-systemd-env-907000" (driver="qemu2")
	I0408 04:38:13.018745    9492 client.go:168] LocalClient.Create starting
	I0408 04:38:13.018812    9492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:38:13.018839    9492 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:13.018850    9492 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:13.018892    9492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:38:13.018914    9492 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:13.018921    9492 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:13.019272    9492 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:38:13.190604    9492 main.go:141] libmachine: Creating SSH key...
	I0408 04:38:13.290490    9492 main.go:141] libmachine: Creating Disk image...
	I0408 04:38:13.290499    9492 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:38:13.290699    9492 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2
	I0408 04:38:13.303295    9492 main.go:141] libmachine: STDOUT: 
	I0408 04:38:13.303315    9492 main.go:141] libmachine: STDERR: 
	I0408 04:38:13.303408    9492 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2 +20000M
	I0408 04:38:13.314608    9492 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:38:13.314622    9492 main.go:141] libmachine: STDERR: 
	I0408 04:38:13.314631    9492 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2
	I0408 04:38:13.314638    9492 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:38:13.314668    9492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e5:57:03:ec:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2
	I0408 04:38:13.316429    9492 main.go:141] libmachine: STDOUT: 
	I0408 04:38:13.316445    9492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:38:13.316470    9492 client.go:171] duration metric: took 297.720084ms to LocalClient.Create
	I0408 04:38:15.318742    9492 start.go:128] duration metric: took 2.32502525s to createHost
	I0408 04:38:15.318820    9492 start.go:83] releasing machines lock for "force-systemd-env-907000", held for 2.325159375s
	W0408 04:38:15.318915    9492 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:15.330100    9492 out.go:177] * Deleting "force-systemd-env-907000" in qemu2 ...
	W0408 04:38:15.358965    9492 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:15.358993    9492 start.go:728] Will try again in 5 seconds ...
	I0408 04:38:20.361168    9492 start.go:360] acquireMachinesLock for force-systemd-env-907000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:38:20.548541    9492 start.go:364] duration metric: took 187.259834ms to acquireMachinesLock for "force-systemd-env-907000"
	I0408 04:38:20.548656    9492 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:38:20.548919    9492 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:38:20.562628    9492 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 04:38:20.610618    9492 start.go:159] libmachine.API.Create for "force-systemd-env-907000" (driver="qemu2")
	I0408 04:38:20.610672    9492 client.go:168] LocalClient.Create starting
	I0408 04:38:20.610789    9492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:38:20.610860    9492 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:20.610875    9492 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:20.610933    9492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:38:20.610979    9492 main.go:141] libmachine: Decoding PEM data...
	I0408 04:38:20.610992    9492 main.go:141] libmachine: Parsing certificate...
	I0408 04:38:20.611580    9492 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:38:20.769056    9492 main.go:141] libmachine: Creating SSH key...
	I0408 04:38:20.922023    9492 main.go:141] libmachine: Creating Disk image...
	I0408 04:38:20.922031    9492 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:38:20.922213    9492 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2
	I0408 04:38:20.935121    9492 main.go:141] libmachine: STDOUT: 
	I0408 04:38:20.935147    9492 main.go:141] libmachine: STDERR: 
	I0408 04:38:20.935203    9492 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2 +20000M
	I0408 04:38:20.946096    9492 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:38:20.946111    9492 main.go:141] libmachine: STDERR: 
	I0408 04:38:20.946121    9492 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2
	I0408 04:38:20.946126    9492 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:38:20.946161    9492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6b:5d:5b:78:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/force-systemd-env-907000/disk.qcow2
	I0408 04:38:20.947892    9492 main.go:141] libmachine: STDOUT: 
	I0408 04:38:20.947910    9492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:38:20.947922    9492 client.go:171] duration metric: took 337.25ms to LocalClient.Create
	I0408 04:38:22.950191    9492 start.go:128] duration metric: took 2.401235667s to createHost
	I0408 04:38:22.950259    9492 start.go:83] releasing machines lock for "force-systemd-env-907000", held for 2.401702584s
	W0408 04:38:22.950600    9492 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:38:22.968191    9492 out.go:177] 
	W0408 04:38:22.976910    9492 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:38:22.976930    9492 out.go:239] * 
	* 
	W0408 04:38:22.978837    9492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:38:22.989889    9492 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-907000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-907000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-907000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.6185ms)

-- stdout --
	* The control-plane node force-systemd-env-907000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-907000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-907000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-08 04:38:23.08831 -0700 PDT m=+729.714450042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-907000 -n force-systemd-env-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-907000 -n force-systemd-env-907000: exit status 7 (35.454375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-907000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-907000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-907000
--- FAIL: TestForceSystemdEnv (10.39s)
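
Every start failure in this report reduces to the same driver error: the qemu helper is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal shell check for the build host, assuming the paths shown in the logs and the BSD nc shipped with macOS:

    # Is the daemon up, and does its unix socket accept connections?
    # A "Connection refused" here reproduces the error in the logs above.
    pgrep -fl socket_vmnet || echo "socket_vmnet daemon not running"
    nc -U /var/run/socket_vmnet < /dev/null && echo "socket reachable"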

TestErrorSpam/setup (9.77s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-294000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-294000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 --driver=qemu2 : exit status 80 (9.769948875s)

-- stdout --
	* [nospam-294000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-294000" primary control-plane node in "nospam-294000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-294000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-294000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-294000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-294000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18588
- KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-294000" primary control-plane node in "nospam-294000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-294000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.77s)
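
The disk-image phase in the logs (qemu-img convert, then resize) finishes with empty STDERR every time; only the network attach fails. A sketch of those two steps in isolation, using a hypothetical machine directory in place of the per-profile paths above and assuming a disk.qcow2.raw already exists:

    # M is a stand-in for ~/.minikube/machines/<profile> from the logs.
    M="$HOME/.minikube/machines/demo"
    qemu-img convert -f raw -O qcow2 "$M/disk.qcow2.raw" "$M/disk.qcow2"
    qemu-img resize "$M/disk.qcow2" +20000M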

TestFunctional/serial/StartWithProxy (9.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-756000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-756000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.863949833s)

-- stdout --
	* [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-756000" primary control-plane node in "functional-756000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-756000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51041 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51041 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51041 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-756000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18588
- KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-756000" primary control-plane node in "functional-756000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-756000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51041 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51041 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51041 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (71.763916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.94s)
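
The stderr explains the two failed assertions: with HTTP_PROXY pointing at a localhost address, minikube ignores the proxy ("Local proxy ignored") instead of emitting the expected "You appear to be using a proxy" warning. Roughly how such a run is invoked, with the proxy value taken from the log (the env-var placement is an assumption about the harness):

    HTTP_PROXY=localhost:51041 out/minikube-darwin-arm64 start -p functional-756000 \
      --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2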

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-756000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-756000 --alsologtostderr -v=8: exit status 80 (5.187624209s)

-- stdout --
	* [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-756000" primary control-plane node in "functional-756000" cluster
	* Restarting existing qemu2 VM for "functional-756000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-756000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:27:43.600994    8058 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:27:43.601374    8058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:27:43.601379    8058 out.go:304] Setting ErrFile to fd 2...
	I0408 04:27:43.601381    8058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:27:43.601570    8058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:27:43.602955    8058 out.go:298] Setting JSON to false
	I0408 04:27:43.619446    8058 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5232,"bootTime":1712570431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:27:43.619509    8058 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:27:43.624788    8058 out.go:177] * [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:27:43.631751    8058 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:27:43.631825    8058 notify.go:220] Checking for updates...
	I0408 04:27:43.635817    8058 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:27:43.638716    8058 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:27:43.641705    8058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:27:43.644778    8058 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:27:43.647710    8058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:27:43.651006    8058 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:27:43.651061    8058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:27:43.655790    8058 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:27:43.662744    8058 start.go:297] selected driver: qemu2
	I0408 04:27:43.662750    8058 start.go:901] validating driver "qemu2" against &{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:27:43.662793    8058 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:27:43.665153    8058 cni.go:84] Creating CNI manager for ""
	I0408 04:27:43.665168    8058 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:27:43.665207    8058 start.go:340] cluster config:
	{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:27:43.669463    8058 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:27:43.677728    8058 out.go:177] * Starting "functional-756000" primary control-plane node in "functional-756000" cluster
	I0408 04:27:43.681736    8058 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:27:43.681748    8058 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:27:43.681755    8058 cache.go:56] Caching tarball of preloaded images
	I0408 04:27:43.681801    8058 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:27:43.681806    8058 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:27:43.681852    8058 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/functional-756000/config.json ...
	I0408 04:27:43.682371    8058 start.go:360] acquireMachinesLock for functional-756000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:27:43.682394    8058 start.go:364] duration metric: took 18.375µs to acquireMachinesLock for "functional-756000"
	I0408 04:27:43.682402    8058 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:27:43.682408    8058 fix.go:54] fixHost starting: 
	I0408 04:27:43.682513    8058 fix.go:112] recreateIfNeeded on functional-756000: state=Stopped err=<nil>
	W0408 04:27:43.682521    8058 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:27:43.689727    8058 out.go:177] * Restarting existing qemu2 VM for "functional-756000" ...
	I0408 04:27:43.693762    8058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:61:7e:4b:f3:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/disk.qcow2
	I0408 04:27:43.695716    8058 main.go:141] libmachine: STDOUT: 
	I0408 04:27:43.695733    8058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:27:43.695757    8058 fix.go:56] duration metric: took 13.349166ms for fixHost
	I0408 04:27:43.695762    8058 start.go:83] releasing machines lock for "functional-756000", held for 13.36425ms
	W0408 04:27:43.695768    8058 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:27:43.695801    8058 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:27:43.695806    8058 start.go:728] Will try again in 5 seconds ...
	I0408 04:27:48.697893    8058 start.go:360] acquireMachinesLock for functional-756000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:27:48.698336    8058 start.go:364] duration metric: took 300.417µs to acquireMachinesLock for "functional-756000"
	I0408 04:27:48.698491    8058 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:27:48.698510    8058 fix.go:54] fixHost starting: 
	I0408 04:27:48.699255    8058 fix.go:112] recreateIfNeeded on functional-756000: state=Stopped err=<nil>
	W0408 04:27:48.699280    8058 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:27:48.703734    8058 out.go:177] * Restarting existing qemu2 VM for "functional-756000" ...
	I0408 04:27:48.710949    8058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:61:7e:4b:f3:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/disk.qcow2
	I0408 04:27:48.720330    8058 main.go:141] libmachine: STDOUT: 
	I0408 04:27:48.720385    8058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:27:48.720457    8058 fix.go:56] duration metric: took 21.948083ms for fixHost
	I0408 04:27:48.720479    8058 start.go:83] releasing machines lock for "functional-756000", held for 22.097708ms
	W0408 04:27:48.720663    8058 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-756000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-756000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:27:48.729620    8058 out.go:177] 
	W0408 04:27:48.733649    8058 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:27:48.733689    8058 out.go:239] * 
	* 
	W0408 04:27:48.736411    8058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:27:48.743650    8058 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-756000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.189245666s for "functional-756000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (68.994083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
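
The post-mortem block that recurs after each failure is a status probe: helpers_test.go runs minikube status with a Go template and tolerates a nonzero exit as a possibly-expected stopped host. The equivalent manual check with this profile:

    # Prints "Stopped" and exits nonzero while the VM is down, which is
    # why the helpers skip log retrieval.
    out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000
    echo "status exit code: $?"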

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.3435ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-756000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.130625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
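
Since no cluster was ever created, the kubeconfig has no current context, so every kubectl-based check fails at configuration time rather than at the API server. The two lookups involved:

    # Fails with "current-context is not set" when minikube never started.
    kubectl config current-context
    # Lists the named context if it exists; here it does not.
    kubectl config get-contexts functional-756000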

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-756000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-756000 get po -A: exit status 1 (26.633625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-756000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-756000\n"*: args "kubectl --context functional-756000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-756000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.238209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl images: exit status 83 (46.022625ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.867917ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-756000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.930708ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.993667ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)
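
The cache test is a three-step round trip: delete the image over ssh, reload it from minikube's on-disk cache, then verify it with crictl. Each ssh step needs a running node, which is why every step above exits 83. The sequence as the test runs it:

    out/minikube-darwin-arm64 -p functional-756000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-756000 cache reload
    out/minikube-darwin-arm64 -p functional-756000 ssh sudo crictl inspecti registry.k8s.io/pause:latest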

TestFunctional/serial/MinikubeKubectlCmd (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 kubectl -- --context functional-756000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 kubectl -- --context functional-756000 get pods: exit status 1 (656.358167ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-756000
	* no server found for cluster "functional-756000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-756000 kubectl -- --context functional-756000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (34.067417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.69s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.94s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-756000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-756000 get pods: exit status 1 (902.868375ms)

** stderr **
	Error in configuration:
	* context was not found for specified context: functional-756000
	* no server found for cluster "functional-756000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-756000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.127125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.94s)
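Both kubectl variants fail the same way: because no start ever completed, the functional-756000 context was never written to the kubeconfig, so these errors are client-side configuration failures rather than an unreachable apiserver. A quick confirmation (a sketch; it assumes kubectl is on PATH and reuses the KUBECONFIG path shown in this run's environment):

    # The profile's context should be listed after a successful start;
    # here it is absent, matching the "context was not found" errors above
    KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig \
      kubectl config get-contexts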

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-756000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-756000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.199984417s)

-- stdout --
	* [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-756000" primary control-plane node in "functional-756000" cluster
	* Restarting existing qemu2 VM for "functional-756000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-756000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-756000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-756000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.200533208s for "functional-756000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (69.937833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
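This failure shows the root cause behind the whole serial group: both restart attempts die dialing /var/run/socket_vmnet, so the qemu2 VM never boots and every later test observes state=Stopped. A hedged diagnostic sketch for the CI host (the socket and client paths come from the log above; the daemon check is an assumption, since how socket_vmnet is supervised varies by install):

    # Does the socket exist? minikube dials this path via the client below
    ls -l /var/run/socket_vmnet
    # Client binary used by the qemu2 driver, per the libmachine line in the log
    ls -l /opt/socket_vmnet/bin/socket_vmnet_client
    # socket_vmnet usually runs as a root daemon; the grep target is a guess
    sudo launchctl list | grep -i socket_vmnet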

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-756000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-756000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.737625ms)

** stderr **
	error: context "functional-756000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-756000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.159958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.09s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 logs: exit status 83 (85.193375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
	|         | -p download-only-465000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| delete  | -p download-only-465000                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| start   | -o=json --download-only                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
	|         | -p download-only-878000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| delete  | -p download-only-878000                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| start   | -o=json --download-only                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
	|         | -p download-only-444000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                                        |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| delete  | -p download-only-444000                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| delete  | -p download-only-465000                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| delete  | -p download-only-878000                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| delete  | -p download-only-444000                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| start   | --download-only -p                                                       | binary-mirror-542000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | binary-mirror-542000                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
	|         | --binary-mirror                                                          |                      |         |                |                     |                     |
	|         | http://127.0.0.1:51009                                                   |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-542000                                                  | binary-mirror-542000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| addons  | enable dashboard -p                                                      | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | addons-580000                                                            |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | addons-580000                                                            |                      |         |                |                     |                     |
	| start   | -p addons-580000 --wait=true                                             | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
	|         | --addons=registry                                                        |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
	| delete  | -p addons-580000                                                         | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| start   | -p nospam-294000 -n=1 --memory=2250 --wait=false                         | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| delete  | -p nospam-294000                                                         | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | --memory=4000                                                            |                      |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
	| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
	| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | minikube-local-cache-test:functional-756000                              |                      |         |                |                     |                     |
	| cache   | functional-756000 cache delete                                           | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | minikube-local-cache-test:functional-756000                              |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| ssh     | functional-756000 ssh sudo                                               | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | crictl images                                                            |                      |         |                |                     |                     |
	| ssh     | functional-756000                                                        | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| ssh     | functional-756000 ssh                                                    | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-756000 cache reload                                           | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	| ssh     | functional-756000 ssh                                                    | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| kubectl | functional-756000 kubectl --                                             | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | --context functional-756000                                              |                      |         |                |                     |                     |
	|         | get pods                                                                 |                      |         |                |                     |                     |
	| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
	|         | --wait=all                                                               |                      |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 04:27:53
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 04:27:53.949972    8136 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:27:53.950115    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:27:53.950117    8136 out.go:304] Setting ErrFile to fd 2...
	I0408 04:27:53.950119    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:27:53.950250    8136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:27:53.951253    8136 out.go:298] Setting JSON to false
	I0408 04:27:53.967253    8136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5242,"bootTime":1712570431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:27:53.967307    8136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:27:53.973080    8136 out.go:177] * [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:27:53.981977    8136 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:27:53.982032    8136 notify.go:220] Checking for updates...
	I0408 04:27:53.989930    8136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:27:53.993032    8136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:27:53.995947    8136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:27:53.998967    8136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:27:54.001985    8136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:27:54.005272    8136 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:27:54.005319    8136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:27:54.009976    8136 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:27:54.018968    8136 start.go:297] selected driver: qemu2
	I0408 04:27:54.018972    8136 start.go:901] validating driver "qemu2" against &{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:27:54.019021    8136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:27:54.021341    8136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:27:54.021389    8136 cni.go:84] Creating CNI manager for ""
	I0408 04:27:54.021396    8136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:27:54.021435    8136 start.go:340] cluster config:
	{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:27:54.025848    8136 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:27:54.035011    8136 out.go:177] * Starting "functional-756000" primary control-plane node in "functional-756000" cluster
	I0408 04:27:54.038013    8136 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:27:54.038028    8136 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:27:54.038035    8136 cache.go:56] Caching tarball of preloaded images
	I0408 04:27:54.038090    8136 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:27:54.038094    8136 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:27:54.038148    8136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/functional-756000/config.json ...
	I0408 04:27:54.038937    8136 start.go:360] acquireMachinesLock for functional-756000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:27:54.038972    8136 start.go:364] duration metric: took 30.459µs to acquireMachinesLock for "functional-756000"
	I0408 04:27:54.038980    8136 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:27:54.038984    8136 fix.go:54] fixHost starting: 
	I0408 04:27:54.039109    8136 fix.go:112] recreateIfNeeded on functional-756000: state=Stopped err=<nil>
	W0408 04:27:54.039116    8136 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:27:54.048964    8136 out.go:177] * Restarting existing qemu2 VM for "functional-756000" ...
	I0408 04:27:54.056007    8136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:61:7e:4b:f3:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/disk.qcow2
	I0408 04:27:54.058364    8136 main.go:141] libmachine: STDOUT: 
	I0408 04:27:54.058382    8136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:27:54.058416    8136 fix.go:56] duration metric: took 19.429833ms for fixHost
	I0408 04:27:54.058420    8136 start.go:83] releasing machines lock for "functional-756000", held for 19.44475ms
	W0408 04:27:54.058424    8136 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:27:54.058454    8136 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:27:54.058473    8136 start.go:728] Will try again in 5 seconds ...
	I0408 04:27:59.060562    8136 start.go:360] acquireMachinesLock for functional-756000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:27:59.060923    8136 start.go:364] duration metric: took 280.917µs to acquireMachinesLock for "functional-756000"
	I0408 04:27:59.061031    8136 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:27:59.061046    8136 fix.go:54] fixHost starting: 
	I0408 04:27:59.061705    8136 fix.go:112] recreateIfNeeded on functional-756000: state=Stopped err=<nil>
	W0408 04:27:59.061723    8136 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:27:59.070044    8136 out.go:177] * Restarting existing qemu2 VM for "functional-756000" ...
	I0408 04:27:59.074202    8136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:61:7e:4b:f3:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/disk.qcow2
	I0408 04:27:59.083299    8136 main.go:141] libmachine: STDOUT: 
	I0408 04:27:59.083349    8136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:27:59.083422    8136 fix.go:56] duration metric: took 22.378541ms for fixHost
	I0408 04:27:59.083437    8136 start.go:83] releasing machines lock for "functional-756000", held for 22.502583ms
	W0408 04:27:59.083587    8136 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-756000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:27:59.090077    8136 out.go:177] 
	W0408 04:27:59.094169    8136 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:27:59.094226    8136 out.go:239] * 
	W0408 04:27:59.097123    8136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:27:59.104919    8136 out.go:177] 
	
	
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-756000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
|         | -p download-only-465000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
| delete  | -p download-only-465000                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
| start   | -o=json --download-only                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
|         | -p download-only-878000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
| delete  | -p download-only-878000                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
| start   | -o=json --download-only                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
|         | -p download-only-444000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-rc.0                                        |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| delete  | -p download-only-444000                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| delete  | -p download-only-465000                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| delete  | -p download-only-878000                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| delete  | -p download-only-444000                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| start   | --download-only -p                                                       | binary-mirror-542000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | binary-mirror-542000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:51009                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-542000                                                  | binary-mirror-542000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| addons  | enable dashboard -p                                                      | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | addons-580000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | addons-580000                                                            |                      |         |                |                     |                     |
| start   | -p addons-580000 --wait=true                                             | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-580000                                                         | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| start   | -p nospam-294000 -n=1 --memory=2250 --wait=false                         | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-294000                                                         | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | minikube-local-cache-test:functional-756000                              |                      |         |                |                     |                     |
| cache   | functional-756000 cache delete                                           | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | minikube-local-cache-test:functional-756000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| ssh     | functional-756000 ssh sudo                                               | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-756000                                                        | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-756000 ssh                                                    | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-756000 cache reload                                           | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| ssh     | functional-756000 ssh                                                    | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-756000 kubectl --                                             | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --context functional-756000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/08 04:27:53
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0408 04:27:53.949972    8136 out.go:291] Setting OutFile to fd 1 ...
I0408 04:27:53.950115    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:27:53.950117    8136 out.go:304] Setting ErrFile to fd 2...
I0408 04:27:53.950119    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:27:53.950250    8136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:27:53.951253    8136 out.go:298] Setting JSON to false
I0408 04:27:53.967253    8136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5242,"bootTime":1712570431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0408 04:27:53.967307    8136 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0408 04:27:53.973080    8136 out.go:177] * [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0408 04:27:53.981977    8136 out.go:177]   - MINIKUBE_LOCATION=18588
I0408 04:27:53.982032    8136 notify.go:220] Checking for updates...
I0408 04:27:53.989930    8136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
I0408 04:27:53.993032    8136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0408 04:27:53.995947    8136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0408 04:27:53.998967    8136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
I0408 04:27:54.001985    8136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0408 04:27:54.005272    8136 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:27:54.005319    8136 driver.go:392] Setting default libvirt URI to qemu:///system
I0408 04:27:54.009976    8136 out.go:177] * Using the qemu2 driver based on existing profile
I0408 04:27:54.018968    8136 start.go:297] selected driver: qemu2
I0408 04:27:54.018972    8136 start.go:901] validating driver "qemu2" against &{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 04:27:54.019021    8136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0408 04:27:54.021341    8136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0408 04:27:54.021389    8136 cni.go:84] Creating CNI manager for ""
I0408 04:27:54.021396    8136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0408 04:27:54.021435    8136 start.go:340] cluster config:
{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 04:27:54.025848    8136 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 04:27:54.035011    8136 out.go:177] * Starting "functional-756000" primary control-plane node in "functional-756000" cluster
I0408 04:27:54.038013    8136 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0408 04:27:54.038028    8136 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0408 04:27:54.038035    8136 cache.go:56] Caching tarball of preloaded images
I0408 04:27:54.038090    8136 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0408 04:27:54.038094    8136 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0408 04:27:54.038148    8136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/functional-756000/config.json ...
I0408 04:27:54.038937    8136 start.go:360] acquireMachinesLock for functional-756000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 04:27:54.038972    8136 start.go:364] duration metric: took 30.459µs to acquireMachinesLock for "functional-756000"
I0408 04:27:54.038980    8136 start.go:96] Skipping create...Using existing machine configuration
I0408 04:27:54.038984    8136 fix.go:54] fixHost starting: 
I0408 04:27:54.039109    8136 fix.go:112] recreateIfNeeded on functional-756000: state=Stopped err=<nil>
W0408 04:27:54.039116    8136 fix.go:138] unexpected machine state, will restart: <nil>
I0408 04:27:54.048964    8136 out.go:177] * Restarting existing qemu2 VM for "functional-756000" ...
I0408 04:27:54.056007    8136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:61:7e:4b:f3:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/disk.qcow2
I0408 04:27:54.058364    8136 main.go:141] libmachine: STDOUT: 
I0408 04:27:54.058382    8136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0408 04:27:54.058416    8136 fix.go:56] duration metric: took 19.429833ms for fixHost
I0408 04:27:54.058420    8136 start.go:83] releasing machines lock for "functional-756000", held for 19.44475ms
W0408 04:27:54.058424    8136 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0408 04:27:54.058454    8136 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0408 04:27:54.058473    8136 start.go:728] Will try again in 5 seconds ...
I0408 04:27:59.060562    8136 start.go:360] acquireMachinesLock for functional-756000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 04:27:59.060923    8136 start.go:364] duration metric: took 280.917µs to acquireMachinesLock for "functional-756000"
I0408 04:27:59.061031    8136 start.go:96] Skipping create...Using existing machine configuration
I0408 04:27:59.061046    8136 fix.go:54] fixHost starting: 
I0408 04:27:59.061705    8136 fix.go:112] recreateIfNeeded on functional-756000: state=Stopped err=<nil>
W0408 04:27:59.061723    8136 fix.go:138] unexpected machine state, will restart: <nil>
I0408 04:27:59.070044    8136 out.go:177] * Restarting existing qemu2 VM for "functional-756000" ...
I0408 04:27:59.074202    8136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:61:7e:4b:f3:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/disk.qcow2
I0408 04:27:59.083299    8136 main.go:141] libmachine: STDOUT: 
I0408 04:27:59.083349    8136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0408 04:27:59.083422    8136 fix.go:56] duration metric: took 22.378541ms for fixHost
I0408 04:27:59.083437    8136 start.go:83] releasing machines lock for "functional-756000", held for 22.502583ms
W0408 04:27:59.083587    8136 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-756000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0408 04:27:59.090077    8136 out.go:177] 
W0408 04:27:59.094169    8136 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0408 04:27:59.094226    8136 out.go:239] * 
W0408 04:27:59.097123    8136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 04:27:59.104919    8136 out.go:177] 

* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.09s)
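Both restart attempts in the log above fail at the same step: the qemu2 driver cannot reach the socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never boots and "minikube logs" has no host output for the assertion to match. A minimal sketch for checking the daemon on the agent, assuming socket_vmnet was installed via Homebrew (the launchd job and service names below are assumptions; the socket and client paths are taken from the log itself):

# Does the socket the driver dials actually exist?
ls -l /var/run/socket_vmnet
# Is a socket_vmnet daemon loaded at all? (job name assumed from the Homebrew package)
sudo launchctl list | grep -i socket_vmnet
# If not, restart the service -- assumes Homebrew's service wrapper manages it
sudo brew services restart socket_vmnet
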
TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd185585787/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
|         | -p download-only-465000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
| delete  | -p download-only-465000                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
| start   | -o=json --download-only                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
|         | -p download-only-878000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
| delete  | -p download-only-878000                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
| start   | -o=json --download-only                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
|         | -p download-only-444000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-rc.0                                        |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| delete  | -p download-only-444000                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| delete  | -p download-only-465000                                                  | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| delete  | -p download-only-878000                                                  | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| delete  | -p download-only-444000                                                  | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| start   | --download-only -p                                                       | binary-mirror-542000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | binary-mirror-542000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:51009                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-542000                                                  | binary-mirror-542000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| addons  | enable dashboard -p                                                      | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | addons-580000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | addons-580000                                                            |                      |         |                |                     |                     |
| start   | -p addons-580000 --wait=true                                             | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-580000                                                         | addons-580000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| start   | -p nospam-294000 -n=1 --memory=2250 --wait=false                         | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-294000 --log_dir                                                  | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-294000                                                         | nospam-294000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-756000 cache add                                              | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | minikube-local-cache-test:functional-756000                              |                      |         |                |                     |                     |
| cache   | functional-756000 cache delete                                           | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | minikube-local-cache-test:functional-756000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| ssh     | functional-756000 ssh sudo                                               | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-756000                                                        | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-756000 ssh                                                    | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-756000 cache reload                                           | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
| ssh     | functional-756000 ssh                                                    | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT | 08 Apr 24 04:27 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-756000 kubectl --                                             | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --context functional-756000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-756000                                                     | functional-756000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:27 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/08 04:27:53
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0408 04:27:53.949972    8136 out.go:291] Setting OutFile to fd 1 ...
I0408 04:27:53.950115    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:27:53.950117    8136 out.go:304] Setting ErrFile to fd 2...
I0408 04:27:53.950119    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:27:53.950250    8136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:27:53.951253    8136 out.go:298] Setting JSON to false
I0408 04:27:53.967253    8136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5242,"bootTime":1712570431,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0408 04:27:53.967307    8136 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0408 04:27:53.973080    8136 out.go:177] * [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0408 04:27:53.981977    8136 out.go:177]   - MINIKUBE_LOCATION=18588
I0408 04:27:53.982032    8136 notify.go:220] Checking for updates...
I0408 04:27:53.989930    8136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
I0408 04:27:53.993032    8136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0408 04:27:53.995947    8136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0408 04:27:53.998967    8136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
I0408 04:27:54.001985    8136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0408 04:27:54.005272    8136 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:27:54.005319    8136 driver.go:392] Setting default libvirt URI to qemu:///system
I0408 04:27:54.009976    8136 out.go:177] * Using the qemu2 driver based on existing profile
I0408 04:27:54.018968    8136 start.go:297] selected driver: qemu2
I0408 04:27:54.018972    8136 start.go:901] validating driver "qemu2" against &{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 04:27:54.019021    8136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0408 04:27:54.021341    8136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0408 04:27:54.021389    8136 cni.go:84] Creating CNI manager for ""
I0408 04:27:54.021396    8136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0408 04:27:54.021435    8136 start.go:340] cluster config:
{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 04:27:54.025848    8136 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 04:27:54.035011    8136 out.go:177] * Starting "functional-756000" primary control-plane node in "functional-756000" cluster
I0408 04:27:54.038013    8136 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0408 04:27:54.038028    8136 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0408 04:27:54.038035    8136 cache.go:56] Caching tarball of preloaded images
I0408 04:27:54.038090    8136 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0408 04:27:54.038094    8136 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0408 04:27:54.038148    8136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/functional-756000/config.json ...
I0408 04:27:54.038937    8136 start.go:360] acquireMachinesLock for functional-756000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 04:27:54.038972    8136 start.go:364] duration metric: took 30.459µs to acquireMachinesLock for "functional-756000"
I0408 04:27:54.038980    8136 start.go:96] Skipping create...Using existing machine configuration
I0408 04:27:54.038984    8136 fix.go:54] fixHost starting: 
I0408 04:27:54.039109    8136 fix.go:112] recreateIfNeeded on functional-756000: state=Stopped err=<nil>
W0408 04:27:54.039116    8136 fix.go:138] unexpected machine state, will restart: <nil>
I0408 04:27:54.048964    8136 out.go:177] * Restarting existing qemu2 VM for "functional-756000" ...
I0408 04:27:54.056007    8136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:61:7e:4b:f3:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/disk.qcow2
I0408 04:27:54.058364    8136 main.go:141] libmachine: STDOUT: 
I0408 04:27:54.058382    8136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0408 04:27:54.058416    8136 fix.go:56] duration metric: took 19.429833ms for fixHost
I0408 04:27:54.058420    8136 start.go:83] releasing machines lock for "functional-756000", held for 19.44475ms
W0408 04:27:54.058424    8136 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0408 04:27:54.058454    8136 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0408 04:27:54.058473    8136 start.go:728] Will try again in 5 seconds ...
I0408 04:27:59.060562    8136 start.go:360] acquireMachinesLock for functional-756000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 04:27:59.060923    8136 start.go:364] duration metric: took 280.917µs to acquireMachinesLock for "functional-756000"
I0408 04:27:59.061031    8136 start.go:96] Skipping create...Using existing machine configuration
I0408 04:27:59.061046    8136 fix.go:54] fixHost starting: 
I0408 04:27:59.061705    8136 fix.go:112] recreateIfNeeded on functional-756000: state=Stopped err=<nil>
W0408 04:27:59.061723    8136 fix.go:138] unexpected machine state, will restart: <nil>
I0408 04:27:59.070044    8136 out.go:177] * Restarting existing qemu2 VM for "functional-756000" ...
I0408 04:27:59.074202    8136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:61:7e:4b:f3:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/functional-756000/disk.qcow2
I0408 04:27:59.083299    8136 main.go:141] libmachine: STDOUT: 
I0408 04:27:59.083349    8136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0408 04:27:59.083422    8136 fix.go:56] duration metric: took 22.378541ms for fixHost
I0408 04:27:59.083437    8136 start.go:83] releasing machines lock for "functional-756000", held for 22.502583ms
W0408 04:27:59.083587    8136 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-756000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0408 04:27:59.090077    8136 out.go:177] 
W0408 04:27:59.094169    8136 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0408 04:27:59.094226    8136 out.go:239] * 
W0408 04:27:59.097123    8136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 04:27:59.104919    8136 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
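Every failure below shares the root cause recorded in the "Last Start" log above: the qemu2 driver could not dial /var/run/socket_vmnet, so the functional-756000 VM never restarted. A minimal host-side triage sketch, assuming the Homebrew-managed socket_vmnet service these agents rely on (illustrative commands, not part of the recorded run):

  ls -l /var/run/socket_vmnet                 # the UNIX socket the driver failed to connect to
  sudo launchctl list | grep -i socket_vmnet  # check whether the daemon is loaded at all
  sudo brew services restart socket_vmnet     # restart it, if installed via Homebrew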

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-756000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-756000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.503333ms)

** stderr ** 
	error: context "functional-756000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-756000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
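The context "functional-756000" does not exist error is kubectl-side fallout: because the VM never started, minikube never wrote the context back into the kubeconfig. A quick check with stock kubectl (an illustrative sketch, not from the recorded run):

  kubectl config get-contexts     # functional-756000 stays absent until the cluster starts
  kubectl config current-context  # shows what kubectl would use instead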

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-756000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-756000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-756000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-756000 --alsologtostderr -v=1] stderr:
I0408 04:28:46.604130    8471 out.go:291] Setting OutFile to fd 1 ...
I0408 04:28:46.604536    8471 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:46.604540    8471 out.go:304] Setting ErrFile to fd 2...
I0408 04:28:46.604542    8471 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:46.604721    8471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:28:46.604945    8471 mustload.go:65] Loading cluster: functional-756000
I0408 04:28:46.605117    8471 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:28:46.609182    8471 out.go:177] * The control-plane node functional-756000 host is not running: state=Stopped
I0408 04:28:46.613155    8471 out.go:177]   To start a cluster, run: "minikube start -p functional-756000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (44.847958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 status: exit status 7 (32.019042ms)

-- stdout --
	functional-756000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-756000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (31.894333ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-756000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 status -o json: exit status 7 (32.07075ms)

-- stdout --
	{"Name":"functional-756000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-756000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.152333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
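Exit status 7 from minikube status is a composite code rather than a generic error: assuming the status bit flags minikube documents (host not running = 1, cluster not running = 2, kubernetes not running = 4), 7 means all three are down, which matches the all-Stopped stdout above. A hedged check:

  out/minikube-darwin-arm64 -p functional-756000 status
  echo $?   # expected to print 7 while the VM stays stopped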

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-756000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-756000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.188459ms)

** stderr ** 
	error: context "functional-756000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-756000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-756000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-756000 describe po hello-node-connect: exit status 1 (26.569125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:1600: "kubectl --context functional-756000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-756000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-756000 logs -l app=hello-node-connect: exit status 1 (26.526667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:1606: "kubectl --context functional-756000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-756000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-756000 describe svc hello-node-connect: exit status 1 (26.693084ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:1612: "kubectl --context functional-756000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.103708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-756000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.219875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "echo hello": exit status 83 (47.447083ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"\n"*. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "cat /etc/hostname": exit status 83 (51.863209ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-756000"- but got *"* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"\n"*. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (33.459542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (56.587166ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-756000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.920583ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-756000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-756000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cp functional-756000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2054009047/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 cp functional-756000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2054009047/001/cp-test.txt: exit status 83 (44.640041ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-756000 cp functional-756000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2054009047/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 "sudo cat /home/docker/cp-test.txt": exit status 83 (60.6585ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2054009047/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (48.790291ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-756000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (43.029ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-756000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-756000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
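For reference, the cp round-trip this test automates can be replayed by hand once a cluster is actually running; the commands below are taken from the log itself and only succeed against a started VM:

  out/minikube-darwin-arm64 -p functional-756000 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-darwin-arm64 -p functional-756000 ssh -n functional-756000 "sudo cat /home/docker/cp-test.txt"
  # expected output: Test file for checking file cp process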

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7749/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/test/nested/copy/7749/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/test/nested/copy/7749/hosts": exit status 83 (42.083875ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/test/nested/copy/7749/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-756000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-756000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.574208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7749.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/ssl/certs/7749.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/ssl/certs/7749.pem": exit status 83 (42.308375ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7749.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"sudo cat /etc/ssl/certs/7749.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7749.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-756000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-756000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7749.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /usr/share/ca-certificates/7749.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /usr/share/ca-certificates/7749.pem": exit status 83 (41.750042ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7749.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"sudo cat /usr/share/ca-certificates/7749.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7749.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-756000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-756000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (41.778ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-756000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-756000"
  	"""
  )
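
An aside on the path checked above: /etc/ssl/certs/51391683.0 is the OpenSSL subject-hash name that CertSync derives from minikube_test.pem, so the expected filename can be recomputed locally (assuming the test's pem file is at hand):

  openssl x509 -noout -hash -in minikube_test.pem   # should print 51391683
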
functional_test.go:1995: Checking for existence of /etc/ssl/certs/77492.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/ssl/certs/77492.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/ssl/certs/77492.pem": exit status 83 (45.607875ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/77492.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"sudo cat /etc/ssl/certs/77492.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/77492.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-756000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-756000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/77492.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /usr/share/ca-certificates/77492.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /usr/share/ca-certificates/77492.pem": exit status 83 (42.718084ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/77492.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"sudo cat /usr/share/ca-certificates/77492.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/77492.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-756000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-756000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (42.6145ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-756000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-756000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-756000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (32.528583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
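Note that all three CertSync mismatches are identical: the expected PEM content on the -want side, the stopped-host advice on the +got side. Nothing certificate-specific failed; "minikube ssh" exits 83 before any file can be read. A minimal reproduction sketch, using the profile from this run:

    # the host is Stopped, so every ssh-based check short-circuits with exit 83
    out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000
    out/minikube-darwin-arm64 -p functional-756000 ssh "sudo cat /usr/share/ca-certificates/77492.pem"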

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-756000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-756000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.762375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-756000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-756000 -n functional-756000: exit status 7 (33.99225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
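NodeLabels never reaches its label assertions: kubectl has no functional-756000 context because the cluster was never created. Against a healthy cluster, the same go-template query would print the node's label keys (minikube.k8s.io/name, minikube.k8s.io/version, and so on); a sketch of the query the test runs:

    # prints one label key per range iteration; output shape assumed from the wanted labels above
    kubectl --context functional-756000 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'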

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo systemctl is-active crio": exit status 83 (48.429083ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
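The probe itself is simple: docker is the active runtime in this run, so the test wants systemctl inside the VM to report crio as inactive. A sketch of the intended check, assuming a running guest (is-active prints "inactive" and exits non-zero for a stopped unit, which is what the test accepts):

    out/minikube-darwin-arm64 -p functional-756000 ssh "sudo systemctl is-active crio"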

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 version -o=json --components: exit status 83 (42.981792ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
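On a started cluster this command emits one JSON document with a version entry per component, and the test checks that blob for each expected key (buildctl, commit, containerd, crictl, docker, minikubeVersion, and the rest listed above). The invocation, for reference:

    out/minikube-darwin-arm64 -p functional-756000 version -o=json --components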

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-756000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-756000 image ls --format short --alsologtostderr:
I0408 04:28:47.024549    8486 out.go:291] Setting OutFile to fd 1 ...
I0408 04:28:47.024709    8486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.024715    8486 out.go:304] Setting ErrFile to fd 2...
I0408 04:28:47.024718    8486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.024846    8486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:28:47.025242    8486 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:28:47.025304    8486 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
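This failure repeats across all four list formats (short, plus table, json, and yaml below): with no VM there is no image store, so every format renders an empty list. Per the test's own expectation, a healthy run must include the pause image; sketch:

    # want a registry.k8s.io/pause entry in the output
    out/minikube-darwin-arm64 -p functional-756000 image ls --format short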

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-756000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-756000 image ls --format table --alsologtostderr:
I0408 04:28:47.257506    8498 out.go:291] Setting OutFile to fd 1 ...
I0408 04:28:47.257671    8498 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.257675    8498 out.go:304] Setting ErrFile to fd 2...
I0408 04:28:47.257677    8498 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.257818    8498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:28:47.258204    8498 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:28:47.258267    8498 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-756000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-756000 image ls --format json --alsologtostderr:
I0408 04:28:47.220108    8496 out.go:291] Setting OutFile to fd 1 ...
I0408 04:28:47.220259    8496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.220262    8496 out.go:304] Setting ErrFile to fd 2...
I0408 04:28:47.220265    8496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.220387    8496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:28:47.220810    8496 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:28:47.220877    8496 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-756000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-756000 image ls --format yaml --alsologtostderr:
I0408 04:28:47.182488    8494 out.go:291] Setting OutFile to fd 1 ...
I0408 04:28:47.182623    8494 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.182626    8494 out.go:304] Setting ErrFile to fd 2...
I0408 04:28:47.182628    8494 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.182762    8494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:28:47.183161    8494 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:28:47.183223    8494 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh pgrep buildkitd: exit status 83 (44.672792ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image build -t localhost/my-image:functional-756000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-756000 image build -t localhost/my-image:functional-756000 testdata/build --alsologtostderr:
I0408 04:28:47.107985    8490 out.go:291] Setting OutFile to fd 1 ...
I0408 04:28:47.108459    8490 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.108466    8490 out.go:304] Setting ErrFile to fd 2...
I0408 04:28:47.108469    8490 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:47.108632    8490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:28:47.109045    8490 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:28:47.109478    8490 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:28:47.109723    8490 build_images.go:133] succeeded building to: 
I0408 04:28:47.109726    8490 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls
functional_test.go:442: expected "localhost/my-image:functional-756000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
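ImageBuild fails in two stages: "ssh pgrep buildkitd" exits 83 (no VM to reach), and "image build" then reports success against zero build targets, so nothing is ever tagged. The sequence the test performs, as a sketch:

    out/minikube-darwin-arm64 -p functional-756000 ssh pgrep buildkitd
    out/minikube-darwin-arm64 -p functional-756000 image build -t localhost/my-image:functional-756000 testdata/build
    out/minikube-darwin-arm64 -p functional-756000 image ls    # want localhost/my-image:functional-756000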

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-756000 docker-env) && out/minikube-darwin-arm64 status -p functional-756000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-756000 docker-env) && out/minikube-darwin-arm64 status -p functional-756000": exit status 1 (45.997125ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
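docker-env can only emit a usable DOCKER_HOST for a running VM, so the eval-then-status pipeline fails at its first command. The check, for reference (status alone exits 7 against this stopped profile):

    /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-756000 docker-env) && out/minikube-darwin-arm64 status -p functional-756000"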

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2: exit status 83 (44.5375ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
** stderr ** 
	I0408 04:28:46.887765    8480 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:28:46.888707    8480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.888711    8480 out.go:304] Setting ErrFile to fd 2...
	I0408 04:28:46.888714    8480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.888882    8480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:28:46.889087    8480 mustload.go:65] Loading cluster: functional-756000
	I0408 04:28:46.889288    8480 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:28:46.893712    8480 out.go:177] * The control-plane node functional-756000 host is not running: state=Stopped
	I0408 04:28:46.897593    8480 out.go:177]   To start a cluster, run: "minikube start -p functional-756000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2: exit status 83 (44.403917ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
** stderr ** 
	I0408 04:28:46.980907    8484 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:28:46.981064    8484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.981071    8484 out.go:304] Setting ErrFile to fd 2...
	I0408 04:28:46.981073    8484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.981210    8484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:28:46.981431    8484 mustload.go:65] Loading cluster: functional-756000
	I0408 04:28:46.981627    8484 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:28:46.984645    8484 out.go:177] * The control-plane node functional-756000 host is not running: state=Stopped
	I0408 04:28:46.988668    8484 out.go:177]   To start a cluster, run: "minikube start -p functional-756000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2: exit status 83 (45.728042ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
** stderr ** 
	I0408 04:28:46.935176    8482 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:28:46.935335    8482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.935339    8482 out.go:304] Setting ErrFile to fd 2...
	I0408 04:28:46.935341    8482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.935462    8482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:28:46.935692    8482 mustload.go:65] Loading cluster: functional-756000
	I0408 04:28:46.935913    8482 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:28:46.940682    8482 out.go:177] * The control-plane node functional-756000 host is not running: state=Stopped
	I0408 04:28:46.943560    8482 out.go:177]   To start a cluster, run: "minikube start -p functional-756000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
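The three UpdateContextCmd subtests run the identical command and differ only in the wanted stdout ("No changes" for no_changes, "context has been updated" for the other two); all three instead receive the stopped-host advice. The shared invocation:

    out/minikube-darwin-arm64 -p functional-756000 update-context --alsologtostderr -v=2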

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-756000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-756000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.532084ms)

** stderr ** 
	error: context "functional-756000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-756000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 service list: exit status 83 (45.054667ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-756000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 service list -o json: exit status 83 (54.898ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-756000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 service --namespace=default --https --url hello-node: exit status 83 (44.730833ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-756000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 service hello-node --url --format={{.IP}}: exit status 83 (44.83925ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-756000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 service hello-node --url: exit status 83 (43.819542ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-756000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test.go:1565: failed to parse "* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"": parse "* The control-plane node functional-756000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-756000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
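Every ServiceCmd subtest above shares one root cause: the service subcommands print the stopped-host advice to stdout, which then fails the IP check (Format) and url.Parse (URL). On a healthy cluster the URL variant returns a bare node URL (address below is illustrative only):

    out/minikube-darwin-arm64 -p functional-756000 service hello-node --url
    # e.g. http://192.168.105.4:30080   (hypothetical)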

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0408 04:28:00.966445    8254 out.go:291] Setting OutFile to fd 1 ...
I0408 04:28:00.966628    8254 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:00.966632    8254 out.go:304] Setting ErrFile to fd 2...
I0408 04:28:00.966634    8254 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:28:00.966772    8254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:28:00.966994    8254 mustload.go:65] Loading cluster: functional-756000
I0408 04:28:00.967215    8254 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:28:00.972983    8254 out.go:177] * The control-plane node functional-756000 host is not running: state=Stopped
I0408 04:28:00.985047    8254 out.go:177]   To start a cluster, run: "minikube start -p functional-756000"

stdout: * The control-plane node functional-756000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-756000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 8255: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)
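RunSecondTunnel launches two concurrent tunnel daemons and expects both to run; here each exits 83 immediately, so the cleanup code finds only closed pipes (the "file already closed" reads above). The command both daemons execute:

    out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr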

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-756000": client config: context "functional-756000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-756000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-756000 get svc nginx-svc: exit status 1 (72.982583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-756000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-756000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.60s)
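AccessDirect got the bare endpoint "http:" because nginx-svc never received a LoadBalancer ingress IP; the tunnel that would assign one never started. A sketch of how that IP is read off a working cluster, using standard kubectl jsonpath:

    kubectl --context functional-756000 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'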

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image load --daemon gcr.io/google-containers/addon-resizer:functional-756000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-756000 image load --daemon gcr.io/google-containers/addon-resizer:functional-756000 --alsologtostderr: (1.412156083s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-756000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image load --daemon gcr.io/google-containers/addon-resizer:functional-756000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-756000 image load --daemon gcr.io/google-containers/addon-resizer:functional-756000 --alsologtostderr: (1.308080417s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-756000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.336638792s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-756000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image load --daemon gcr.io/google-containers/addon-resizer:functional-756000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-756000 image load --daemon gcr.io/google-containers/addon-resizer:functional-756000 --alsologtostderr: (1.173237125s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-756000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.59s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image save gcr.io/google-containers/addon-resizer:functional-756000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-756000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036150792s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
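The scutil dump above shows the resolver setup was correct: resolver #8 scopes cluster.local queries to 10.96.0.10, the cluster DNS address. That address is only reachable while "minikube tunnel" is up, so with no tunnel running the query simply times out:

    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A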

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (23.76s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (23.76s)
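
The HTTP check fails the same way: the request to the cluster DNS name never completes within the client timeout. A sketch of an equivalent client-side probe (illustrative; the URL and the expected marker string come from the log, the 10-second timeout is an assumption):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// A hard client timeout reproduces the "Client.Timeout exceeded
	// while awaiting headers" failure mode seen above.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("welcome page served:", strings.Contains(string(body), "Welcome to nginx!"))
}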

TestMultiControlPlane/serial/StartCluster (10.1s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-013000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-013000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.029006541s)

-- stdout --
	* [ha-013000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-013000" primary control-plane node in "ha-013000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:30:42.039670    8553 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:30:42.039786    8553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:30:42.039789    8553 out.go:304] Setting ErrFile to fd 2...
	I0408 04:30:42.039791    8553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:30:42.039926    8553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:30:42.040976    8553 out.go:298] Setting JSON to false
	I0408 04:30:42.057293    8553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5411,"bootTime":1712570431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:30:42.057376    8553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:30:42.062460    8553 out.go:177] * [ha-013000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:30:42.070450    8553 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:30:42.070513    8553 notify.go:220] Checking for updates...
	I0408 04:30:42.074374    8553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:30:42.077337    8553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:30:42.080552    8553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:30:42.083422    8553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:30:42.086420    8553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:30:42.089524    8553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:30:42.093364    8553 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:30:42.100425    8553 start.go:297] selected driver: qemu2
	I0408 04:30:42.100432    8553 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:30:42.100438    8553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:30:42.102687    8553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:30:42.105389    8553 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:30:42.108493    8553 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:30:42.108544    8553 cni.go:84] Creating CNI manager for ""
	I0408 04:30:42.108549    8553 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0408 04:30:42.108552    8553 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 04:30:42.108586    8553 start.go:340] cluster config:
	{Name:ha-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:30:42.112920    8553 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:30:42.120323    8553 out.go:177] * Starting "ha-013000" primary control-plane node in "ha-013000" cluster
	I0408 04:30:42.124428    8553 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:30:42.124446    8553 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:30:42.124457    8553 cache.go:56] Caching tarball of preloaded images
	I0408 04:30:42.124513    8553 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:30:42.124519    8553 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:30:42.124732    8553 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/ha-013000/config.json ...
	I0408 04:30:42.124747    8553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/ha-013000/config.json: {Name:mkb2326feb1e81a7395c27efcfe1a4c69de4a2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:30:42.124965    8553 start.go:360] acquireMachinesLock for ha-013000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:30:42.124994    8553 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "ha-013000"
	I0408 04:30:42.125006    8553 start.go:93] Provisioning new machine with config: &{Name:ha-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:30:42.125032    8553 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:30:42.132387    8553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:30:42.148763    8553 start.go:159] libmachine.API.Create for "ha-013000" (driver="qemu2")
	I0408 04:30:42.148788    8553 client.go:168] LocalClient.Create starting
	I0408 04:30:42.148851    8553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:30:42.148883    8553 main.go:141] libmachine: Decoding PEM data...
	I0408 04:30:42.148890    8553 main.go:141] libmachine: Parsing certificate...
	I0408 04:30:42.148930    8553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:30:42.148951    8553 main.go:141] libmachine: Decoding PEM data...
	I0408 04:30:42.148958    8553 main.go:141] libmachine: Parsing certificate...
	I0408 04:30:42.149294    8553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:30:42.317386    8553 main.go:141] libmachine: Creating SSH key...
	I0408 04:30:42.504749    8553 main.go:141] libmachine: Creating Disk image...
	I0408 04:30:42.504755    8553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:30:42.504971    8553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:30:42.517707    8553 main.go:141] libmachine: STDOUT: 
	I0408 04:30:42.517734    8553 main.go:141] libmachine: STDERR: 
	I0408 04:30:42.517797    8553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2 +20000M
	I0408 04:30:42.528641    8553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:30:42.528656    8553 main.go:141] libmachine: STDERR: 
	I0408 04:30:42.528679    8553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:30:42.528685    8553 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:30:42.529005    8553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e8:86:aa:56:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:30:42.531486    8553 main.go:141] libmachine: STDOUT: 
	I0408 04:30:42.531514    8553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:30:42.531531    8553 client.go:171] duration metric: took 382.741458ms to LocalClient.Create
	I0408 04:30:44.533714    8553 start.go:128] duration metric: took 2.408683875s to createHost
	I0408 04:30:44.533790    8553 start.go:83] releasing machines lock for "ha-013000", held for 2.4088205s
	W0408 04:30:44.533871    8553 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:30:44.543932    8553 out.go:177] * Deleting "ha-013000" in qemu2 ...
	W0408 04:30:44.575563    8553 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:30:44.575589    8553 start.go:728] Will try again in 5 seconds ...
	I0408 04:30:49.577700    8553 start.go:360] acquireMachinesLock for ha-013000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:30:49.578203    8553 start.go:364] duration metric: took 372.792µs to acquireMachinesLock for "ha-013000"
	I0408 04:30:49.578376    8553 start.go:93] Provisioning new machine with config: &{Name:ha-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:30:49.578737    8553 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:30:49.593609    8553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:30:49.643598    8553 start.go:159] libmachine.API.Create for "ha-013000" (driver="qemu2")
	I0408 04:30:49.643648    8553 client.go:168] LocalClient.Create starting
	I0408 04:30:49.643758    8553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:30:49.643827    8553 main.go:141] libmachine: Decoding PEM data...
	I0408 04:30:49.643848    8553 main.go:141] libmachine: Parsing certificate...
	I0408 04:30:49.643924    8553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:30:49.643965    8553 main.go:141] libmachine: Decoding PEM data...
	I0408 04:30:49.643987    8553 main.go:141] libmachine: Parsing certificate...
	I0408 04:30:49.644534    8553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:30:49.803562    8553 main.go:141] libmachine: Creating SSH key...
	I0408 04:30:49.959671    8553 main.go:141] libmachine: Creating Disk image...
	I0408 04:30:49.959678    8553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:30:49.959893    8553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:30:49.976386    8553 main.go:141] libmachine: STDOUT: 
	I0408 04:30:49.976404    8553 main.go:141] libmachine: STDERR: 
	I0408 04:30:49.976458    8553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2 +20000M
	I0408 04:30:49.987468    8553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:30:49.987481    8553 main.go:141] libmachine: STDERR: 
	I0408 04:30:49.987500    8553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:30:49.987503    8553 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:30:49.987543    8553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:44:2d:71:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:30:49.989230    8553 main.go:141] libmachine: STDOUT: 
	I0408 04:30:49.989247    8553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:30:49.989257    8553 client.go:171] duration metric: took 345.609084ms to LocalClient.Create
	I0408 04:30:51.991458    8553 start.go:128] duration metric: took 2.412701042s to createHost
	I0408 04:30:51.991538    8553 start.go:83] releasing machines lock for "ha-013000", held for 2.413328584s
	W0408 04:30:51.992049    8553 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:30:52.003780    8553 out.go:177] 
	W0408 04:30:52.010916    8553 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:30:52.010956    8553 out.go:239] * 
	* 
	W0408 04:30:52.013479    8553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:30:52.022730    8553 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-013000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (69.184125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.10s)
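
Every ha-013000 failure below cascades from the one root cause visible in this log: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so no VM is ever created. A standalone sketch that reproduces just that connection check (illustrative; the socket path is the one from the log, the timeout is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver starts the VM via socket_vmnet_client, which
	// connects to this unix socket; if the daemon is down, the dial
	// fails with "connection refused", exactly as in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the build host, the socket_vmnet service there needs to be brought back up; the retry and the "minikube delete -p ha-013000" advice in the output cannot help while the daemon itself is down.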

TestMultiControlPlane/serial/DeployApp (109.64s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (62.15975ms)

** stderr ** 
	error: cluster "ha-013000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- rollout status deployment/busybox: exit status 1 (59.482167ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.306542ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (77.191833ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.72125ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.6895ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.631709ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.498875ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.387958ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.17175ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.054625ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.095ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.625167ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.303292ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.94275ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.156125ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.412292ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.279625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (109.64s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-013000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.642583ms)

** stderr ** 
	error: no server found for cluster "ha-013000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.218834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-013000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-013000 -v=7 --alsologtostderr: exit status 83 (42.563041ms)

-- stdout --
	* The control-plane node ha-013000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-013000"

-- /stdout --
** stderr ** 
	I0408 04:32:41.869304    8652 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:41.869728    8652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:41.869732    8652 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:41.869734    8652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:41.869880    8652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:41.870100    8652 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:41.870283    8652 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:41.874436    8652 out.go:177] * The control-plane node ha-013000 host is not running: state=Stopped
	I0408 04:32:41.877450    8652 out.go:177]   To start a cluster, run: "minikube start -p ha-013000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-013000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (31.364166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-013000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-013000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.318541ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-013000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-013000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-013000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.403833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-013000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-013000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.065875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status --output json -v=7 --alsologtostderr: exit status 7 (31.9565ms)

-- stdout --
	{"Name":"ha-013000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0408 04:32:42.108501    8665 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:42.108657    8665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.108663    8665 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:42.108666    8665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.108817    8665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:42.108932    8665 out.go:298] Setting JSON to true
	I0408 04:32:42.108943    8665 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:42.108995    8665 notify.go:220] Checking for updates...
	I0408 04:32:42.109116    8665 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:42.109121    8665 status.go:255] checking status of ha-013000 ...
	I0408 04:32:42.109330    8665 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:32:42.109333    8665 status.go:343] host is not running, skipping remaining checks
	I0408 04:32:42.109336    8665 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-013000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.062625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
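
The decode error above ("cannot unmarshal object into Go value of type []cmd.Status") is mechanical: with only the single stopped node, minikube printed one JSON object, while the test unmarshals the output into a slice. A minimal reproduction (Status here is a stand-in for the relevant fields of the test's cmd.Status type):

package main

import (
	"encoding/json"
	"fmt"
)

// Status stands in for the fields of cmd.Status seen in the log output.
type Status struct {
	Name string
	Host string
}

func main() {
	raw := []byte(`{"Name":"ha-013000","Host":"Stopped"}`)

	var many []Status
	// Fails: a JSON object cannot populate a slice.
	fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	// Succeeds when the target matches the shape of the output.
	fmt.Println(json.Unmarshal(raw, &one), one) // <nil> {ha-013000 Stopped}
}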

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 node stop m02 -v=7 --alsologtostderr: exit status 85 (51.699417ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0408 04:32:42.173790    8669 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:42.174018    8669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.174021    8669 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:42.174023    8669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.174154    8669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:42.174432    8669 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:42.174659    8669 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:42.178836    8669 out.go:177] 
	W0408 04:32:42.182860    8669 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0408 04:32:42.182865    8669 out.go:239] * 
	* 
	W0408 04:32:42.185318    8669 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:32:42.189834    8669 out.go:177] 
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-013000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (32.2805ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:32:42.225335    8671 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:42.225499    8671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.225502    8671 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:42.225505    8671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.225614    8671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:42.225729    8671 out.go:298] Setting JSON to false
	I0408 04:32:42.225738    8671 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:42.225805    8671 notify.go:220] Checking for updates...
	I0408 04:32:42.225939    8671 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:42.225945    8671 status.go:255] checking status of ha-013000 ...
	I0408 04:32:42.226163    8671 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:32:42.226167    8671 status.go:343] host is not running, skipping remaining checks
	I0408 04:32:42.226169    8671 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr": ha-013000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr": ha-013000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr": ha-013000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr": ha-013000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.006125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
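The exit status 85 above comes from GUEST_NODE_RETRIEVE: the profile no longer knows about an m02 node to stop. A hedged sketch of a pre-check that lists nodes before stopping; the first-field-per-line parsing of `minikube node list` output is an assumption, not something this log confirms:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasNode reports whether `minikube node list` shows a node whose name ends
// with the given suffix (e.g. "m02"). Output parsing is an assumption.
func hasNode(bin, profile, suffix string) (bool, error) {
	out, err := exec.Command(bin, "-p", profile, "node", "list").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if f := strings.Fields(line); len(f) > 0 && strings.HasSuffix(f[0], suffix) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	bin := "out/minikube-darwin-arm64"
	if ok, err := hasNode(bin, "ha-013000", "m02"); err != nil || !ok {
		fmt.Println("m02 not in profile; skipping stop, err:", err)
		return
	}
	// Same invocation that failed above, now guarded by the node check.
	out, err := exec.Command(bin, "-p", "ha-013000", "node", "stop", "m02").CombinedOutput()
	fmt.Println(string(out), err)
}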
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-013000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (31.983334ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
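The assertion at ha_test.go:413 reads the profile's Status out of `profile list --output json`, whose shape is visible in the quoted blob above. A sketch of that decode, keeping only fields present in the blob; the struct is a trimmed, assumed mirror of minikube's config type, not the real one:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the parts of `profile list --output json`
// that appear in the failure message above.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test wants Status "Degraded" here; the report shows
		// "Stopped" with a single node left in Config.Nodes.
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}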
TestMultiControlPlane/serial/RestartSecondaryNode (48.68s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.500208ms)
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:32:42.394135    8681 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:42.394386    8681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.394392    8681 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:42.394395    8681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.394496    8681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:42.394706    8681 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:42.394904    8681 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:42.399530    8681 out.go:177] 
	W0408 04:32:42.402469    8681 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0408 04:32:42.402474    8681 out.go:239] * 
	* 
	W0408 04:32:42.404430    8681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:32:42.407413    8681 out.go:177] 
** /stderr **
ha_test.go:422: I0408 04:32:42.394135    8681 out.go:291] Setting OutFile to fd 1 ...
I0408 04:32:42.394386    8681 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:32:42.394392    8681 out.go:304] Setting ErrFile to fd 2...
I0408 04:32:42.394395    8681 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:32:42.394496    8681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:32:42.394706    8681 mustload.go:65] Loading cluster: ha-013000
I0408 04:32:42.394904    8681 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:32:42.399530    8681 out.go:177] 
W0408 04:32:42.402469    8681 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0408 04:32:42.402474    8681 out.go:239] * 
* 
W0408 04:32:42.404430    8681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 04:32:42.407413    8681 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-013000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (32.460791ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:32:42.442206    8683 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:42.442382    8683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.442386    8683 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:42.442388    8683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:42.442543    8683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:42.442671    8683 out.go:298] Setting JSON to false
	I0408 04:32:42.442682    8683 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:42.442746    8683 notify.go:220] Checking for updates...
	I0408 04:32:42.442879    8683 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:42.442886    8683 status.go:255] checking status of ha-013000 ...
	I0408 04:32:42.443095    8683 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:32:42.443099    8683 status.go:343] host is not running, skipping remaining checks
	I0408 04:32:42.443102    8683 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (76.402375ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:32:43.052069    8685 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:43.052278    8685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:43.052285    8685 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:43.052289    8685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:43.052463    8685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:43.052603    8685 out.go:298] Setting JSON to false
	I0408 04:32:43.052615    8685 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:43.052644    8685 notify.go:220] Checking for updates...
	I0408 04:32:43.052852    8685 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:43.052859    8685 status.go:255] checking status of ha-013000 ...
	I0408 04:32:43.053122    8685 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:32:43.053127    8685 status.go:343] host is not running, skipping remaining checks
	I0408 04:32:43.053130    8685 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (76.893667ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:32:44.822697    8687 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:44.822873    8687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:44.822876    8687 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:44.822879    8687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:44.823021    8687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:44.823171    8687 out.go:298] Setting JSON to false
	I0408 04:32:44.823185    8687 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:44.823225    8687 notify.go:220] Checking for updates...
	I0408 04:32:44.823410    8687 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:44.823417    8687 status.go:255] checking status of ha-013000 ...
	I0408 04:32:44.823661    8687 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:32:44.823666    8687 status.go:343] host is not running, skipping remaining checks
	I0408 04:32:44.823669    8687 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (76.570917ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:32:46.314018    8689 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:46.314242    8689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:46.314247    8689 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:46.314250    8689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:46.314397    8689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:46.314598    8689 out.go:298] Setting JSON to false
	I0408 04:32:46.314616    8689 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:46.314653    8689 notify.go:220] Checking for updates...
	I0408 04:32:46.314859    8689 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:46.314867    8689 status.go:255] checking status of ha-013000 ...
	I0408 04:32:46.315143    8689 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:32:46.315148    8689 status.go:343] host is not running, skipping remaining checks
	I0408 04:32:46.315151    8689 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (75.101917ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:32:49.334271    8691 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:49.334463    8691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:49.334468    8691 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:49.334471    8691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:49.334641    8691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:49.334811    8691 out.go:298] Setting JSON to false
	I0408 04:32:49.334824    8691 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:49.334863    8691 notify.go:220] Checking for updates...
	I0408 04:32:49.335084    8691 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:49.335093    8691 status.go:255] checking status of ha-013000 ...
	I0408 04:32:49.335329    8691 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:32:49.335333    8691 status.go:343] host is not running, skipping remaining checks
	I0408 04:32:49.335336    8691 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (75.464417ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:32:56.826077    8693 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:32:56.826261    8693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:56.826268    8693 out.go:304] Setting ErrFile to fd 2...
	I0408 04:32:56.826272    8693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:32:56.826430    8693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:32:56.826591    8693 out.go:298] Setting JSON to false
	I0408 04:32:56.826605    8693 mustload.go:65] Loading cluster: ha-013000
	I0408 04:32:56.826640    8693 notify.go:220] Checking for updates...
	I0408 04:32:56.826845    8693 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:32:56.826852    8693 status.go:255] checking status of ha-013000 ...
	I0408 04:32:56.827100    8693 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:32:56.827105    8693 status.go:343] host is not running, skipping remaining checks
	I0408 04:32:56.827107    8693 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (78.24825ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:33:04.072328    8695 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:04.072472    8695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:04.072476    8695 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:04.072479    8695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:04.072644    8695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:04.072800    8695 out.go:298] Setting JSON to false
	I0408 04:33:04.072815    8695 mustload.go:65] Loading cluster: ha-013000
	I0408 04:33:04.072858    8695 notify.go:220] Checking for updates...
	I0408 04:33:04.073090    8695 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:04.073098    8695 status.go:255] checking status of ha-013000 ...
	I0408 04:33:04.073361    8695 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:33:04.073366    8695 status.go:343] host is not running, skipping remaining checks
	I0408 04:33:04.073369    8695 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (74.101458ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:33:11.917722    8697 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:11.917920    8697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:11.917924    8697 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:11.917928    8697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:11.918097    8697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:11.918248    8697 out.go:298] Setting JSON to false
	I0408 04:33:11.918266    8697 mustload.go:65] Loading cluster: ha-013000
	I0408 04:33:11.918308    8697 notify.go:220] Checking for updates...
	I0408 04:33:11.918533    8697 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:11.918539    8697 status.go:255] checking status of ha-013000 ...
	I0408 04:33:11.918810    8697 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:33:11.918814    8697 status.go:343] host is not running, skipping remaining checks
	I0408 04:33:11.918817    8697 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (76.1225ms)
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0408 04:33:31.011056    8705 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:31.011276    8705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:31.011280    8705 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:31.011284    8705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:31.011467    8705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:31.011641    8705 out.go:298] Setting JSON to false
	I0408 04:33:31.011653    8705 mustload.go:65] Loading cluster: ha-013000
	I0408 04:33:31.011689    8705 notify.go:220] Checking for updates...
	I0408 04:33:31.011940    8705 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:31.011947    8705 status.go:255] checking status of ha-013000 ...
	I0408 04:33:31.012210    8705 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:33:31.012215    8705 status.go:343] host is not running, skipping remaining checks
	I0408 04:33:31.012218    8705 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (34.169458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.68s)
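The timestamps above (04:32:42, :43, :44, :46, :49, :56, then 04:33:04, :11, :31) show ha_test.go:428 re-polling status with a growing delay until it gives up. A simplified sketch of that retry shape; the interval and budget values are illustrative, inferred from the log rather than copied from ha_test.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(45 * time.Second)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		// Same status invocation that keeps returning exit status 7 above.
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-013000",
			"status", "-v=7", "--alsologtostderr").CombinedOutput()
		if strings.Contains(string(out), "host: Running") {
			fmt.Println("node is back")
			return
		}
		time.Sleep(delay)
		delay *= 2 // roughly the doubling visible between status calls above
	}
	fmt.Println("timed out; status still reports Stopped")
}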
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-013000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-013000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.336458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.48s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-013000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-013000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-013000 -v=7 --alsologtostderr: (3.129485625s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-013000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-013000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.216986792s)
-- stdout --
	* [ha-013000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-013000" primary control-plane node in "ha-013000" cluster
	* Restarting existing qemu2 VM for "ha-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0408 04:33:34.379626    8735 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:34.379826    8735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:34.379830    8735 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:34.379833    8735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:34.379993    8735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:34.381208    8735 out.go:298] Setting JSON to false
	I0408 04:33:34.399947    8735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5583,"bootTime":1712570431,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:33:34.400007    8735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:33:34.405254    8735 out.go:177] * [ha-013000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:33:34.411204    8735 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:33:34.411254    8735 notify.go:220] Checking for updates...
	I0408 04:33:34.415148    8735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:33:34.418177    8735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:33:34.421070    8735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:33:34.424143    8735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:33:34.427158    8735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:33:34.428985    8735 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:34.429046    8735 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:33:34.433183    8735 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:33:34.439950    8735 start.go:297] selected driver: qemu2
	I0408 04:33:34.439956    8735 start.go:901] validating driver "qemu2" against &{Name:ha-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:33:34.440017    8735 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:33:34.442427    8735 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:33:34.442470    8735 cni.go:84] Creating CNI manager for ""
	I0408 04:33:34.442475    8735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 04:33:34.442520    8735 start.go:340] cluster config:
	{Name:ha-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:33:34.447105    8735 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:33:34.454172    8735 out.go:177] * Starting "ha-013000" primary control-plane node in "ha-013000" cluster
	I0408 04:33:34.458078    8735 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:33:34.458097    8735 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:33:34.458106    8735 cache.go:56] Caching tarball of preloaded images
	I0408 04:33:34.458168    8735 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:33:34.458174    8735 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:33:34.458236    8735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/ha-013000/config.json ...
	I0408 04:33:34.458730    8735 start.go:360] acquireMachinesLock for ha-013000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:33:34.458768    8735 start.go:364] duration metric: took 29.916µs to acquireMachinesLock for "ha-013000"
	I0408 04:33:34.458781    8735 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:33:34.458787    8735 fix.go:54] fixHost starting: 
	I0408 04:33:34.458912    8735 fix.go:112] recreateIfNeeded on ha-013000: state=Stopped err=<nil>
	W0408 04:33:34.458920    8735 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:33:34.465089    8735 out.go:177] * Restarting existing qemu2 VM for "ha-013000" ...
	I0408 04:33:34.469172    8735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:44:2d:71:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:33:34.471615    8735 main.go:141] libmachine: STDOUT: 
	I0408 04:33:34.471636    8735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:33:34.471674    8735 fix.go:56] duration metric: took 12.886125ms for fixHost
	I0408 04:33:34.471678    8735 start.go:83] releasing machines lock for "ha-013000", held for 12.905375ms
	W0408 04:33:34.471685    8735 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:33:34.471720    8735 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:33:34.471725    8735 start.go:728] Will try again in 5 seconds ...
	I0408 04:33:39.473789    8735 start.go:360] acquireMachinesLock for ha-013000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:33:39.474120    8735 start.go:364] duration metric: took 256µs to acquireMachinesLock for "ha-013000"
	I0408 04:33:39.474254    8735 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:33:39.474272    8735 fix.go:54] fixHost starting: 
	I0408 04:33:39.474922    8735 fix.go:112] recreateIfNeeded on ha-013000: state=Stopped err=<nil>
	W0408 04:33:39.474950    8735 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:33:39.480324    8735 out.go:177] * Restarting existing qemu2 VM for "ha-013000" ...
	I0408 04:33:39.485540    8735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:44:2d:71:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:33:39.494600    8735 main.go:141] libmachine: STDOUT: 
	I0408 04:33:39.494680    8735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:33:39.494739    8735 fix.go:56] duration metric: took 20.469ms for fixHost
	I0408 04:33:39.494758    8735 start.go:83] releasing machines lock for "ha-013000", held for 20.615375ms
	W0408 04:33:39.495018    8735 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:33:39.502251    8735 out.go:177] 
	W0408 04:33:39.506289    8735 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:33:39.506362    8735 out.go:239] * 
	* 
	W0408 04:33:39.509037    8735 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:33:39.517279    8735 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-013000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-013000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (34.132541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.48s)
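Every restart attempt above fails at the same step: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet before handing a file descriptor to qemu-system-aarch64. A minimal Go probe along the following lines (hypothetical, not part of the test suite; it assumes only the socket path quoted in the log) reproduces that connectivity check in isolation:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client reports as refused above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the driver failure in this log.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe is also refused, the failure is environmental (the socket_vmnet daemon on the CI host is not listening), which would explain why every qemu2/socket_vmnet test in this report fails identically.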

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 node delete m03 -v=7 --alsologtostderr: exit status 83 (43.332334ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-013000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-013000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:33:39.666392    8747 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:39.666765    8747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:39.666768    8747 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:39.666771    8747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:39.666913    8747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:39.667136    8747 mustload.go:65] Loading cluster: ha-013000
	I0408 04:33:39.667326    8747 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:39.672242    8747 out.go:177] * The control-plane node ha-013000 host is not running: state=Stopped
	I0408 04:33:39.675129    8747 out.go:177]   To start a cluster, run: "minikube start -p ha-013000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-013000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (32.003625ms)

                                                
                                                
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:33:39.708835    8749 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:39.709192    8749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:39.709197    8749 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:39.709200    8749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:39.709389    8749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:39.709552    8749 out.go:298] Setting JSON to false
	I0408 04:33:39.709569    8749 mustload.go:65] Loading cluster: ha-013000
	I0408 04:33:39.709802    8749 notify.go:220] Checking for updates...
	I0408 04:33:39.710080    8749 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:39.710088    8749 status.go:255] checking status of ha-013000 ...
	I0408 04:33:39.710284    8749 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:33:39.710289    8749 status.go:343] host is not running, skipping remaining checks
	I0408 04:33:39.710291    8749 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (31.9725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-013000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.338333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
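The assertion above inspects only two things in the payload it quotes: the profile's Status and the length of Config.Nodes. A hypothetical Go sketch of that check (the struct is mine and deliberately partial; the JSON shape is copied from the 'profile list --output json' output above):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models only the fields the assertion reads; the real payload is much larger.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated stand-in for the real `minikube profile list --output json` output.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-013000","Status":"Stopped","Config":{"Nodes":[{}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects Status "Degraded" after deleting a secondary node;
		// this run reports "Stopped" because the VM never started.
		fmt.Printf("%s: status=%s, nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}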

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-013000 stop -v=7 --alsologtostderr: (3.485126041s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr: exit status 7 (67.722208ms)

                                                
                                                
-- stdout --
	ha-013000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:33:43.398435    8780 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:43.398646    8780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:43.398650    8780 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:43.398653    8780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:43.398843    8780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:43.399006    8780 out.go:298] Setting JSON to false
	I0408 04:33:43.399019    8780 mustload.go:65] Loading cluster: ha-013000
	I0408 04:33:43.399061    8780 notify.go:220] Checking for updates...
	I0408 04:33:43.399279    8780 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:43.399286    8780 status.go:255] checking status of ha-013000 ...
	I0408 04:33:43.399576    8780 status.go:330] ha-013000 host status = "Stopped" (err=<nil>)
	I0408 04:33:43.399581    8780 status.go:343] host is not running, skipping remaining checks
	I0408 04:33:43.399589    8780 status.go:257] ha-013000 status: &{Name:ha-013000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr": ha-013000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr": ha-013000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-013000 status -v=7 --alsologtostderr": ha-013000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (33.983833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.59s)
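helpers_test.go flags "exit status 7 (may be ok)" because, assuming minikube's status exit-code bitmask (bit 0 = host not running, bit 1 = kubelet not running, bit 2 = apiserver not running; this convention is an assumption, not stated in this log), 7 simply encodes that all three are down, which is the expected state immediately after "minikube stop". A small sketch of that reading:

package main

import "fmt"

// decodeStatusExit interprets a `minikube status` exit code under the assumed
// bitmask convention described above; 7 (0b111) would mean host, kubelet, and
// apiserver are all reported as not running.
func decodeStatusExit(code int) string {
	return fmt.Sprintf("host stopped: %t, kubelet stopped: %t, apiserver stopped: %t",
		code&1 != 0, code&2 != 0, code&4 != 0)
}

func main() {
	fmt.Println(decodeStatusExit(7)) // the exit status observed above
}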

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-013000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-013000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.190524334s)

                                                
                                                
-- stdout --
	* [ha-013000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-013000" primary control-plane node in "ha-013000" cluster
	* Restarting existing qemu2 VM for "ha-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:33:43.465030    8784 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:43.465161    8784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:43.465164    8784 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:43.465167    8784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:43.465314    8784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:43.466344    8784 out.go:298] Setting JSON to false
	I0408 04:33:43.482468    8784 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5592,"bootTime":1712570431,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:33:43.482531    8784 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:33:43.486669    8784 out.go:177] * [ha-013000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:33:43.494669    8784 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:33:43.498614    8784 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:33:43.494738    8784 notify.go:220] Checking for updates...
	I0408 04:33:43.504639    8784 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:33:43.507647    8784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:33:43.510605    8784 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:33:43.513671    8784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:33:43.516807    8784 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:43.517055    8784 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:33:43.521622    8784 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:33:43.528674    8784 start.go:297] selected driver: qemu2
	I0408 04:33:43.528682    8784 start.go:901] validating driver "qemu2" against &{Name:ha-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:33:43.528757    8784 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:33:43.531050    8784 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:33:43.531093    8784 cni.go:84] Creating CNI manager for ""
	I0408 04:33:43.531098    8784 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 04:33:43.531141    8784 start.go:340] cluster config:
	{Name:ha-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:33:43.535473    8784 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:33:43.542411    8784 out.go:177] * Starting "ha-013000" primary control-plane node in "ha-013000" cluster
	I0408 04:33:43.546584    8784 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:33:43.546602    8784 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:33:43.546611    8784 cache.go:56] Caching tarball of preloaded images
	I0408 04:33:43.546660    8784 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:33:43.546666    8784 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:33:43.546724    8784 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/ha-013000/config.json ...
	I0408 04:33:43.547205    8784 start.go:360] acquireMachinesLock for ha-013000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:33:43.547230    8784 start.go:364] duration metric: took 19.25µs to acquireMachinesLock for "ha-013000"
	I0408 04:33:43.547238    8784 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:33:43.547244    8784 fix.go:54] fixHost starting: 
	I0408 04:33:43.547352    8784 fix.go:112] recreateIfNeeded on ha-013000: state=Stopped err=<nil>
	W0408 04:33:43.547360    8784 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:33:43.555624    8784 out.go:177] * Restarting existing qemu2 VM for "ha-013000" ...
	I0408 04:33:43.559682    8784 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:44:2d:71:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:33:43.561660    8784 main.go:141] libmachine: STDOUT: 
	I0408 04:33:43.561681    8784 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:33:43.561713    8784 fix.go:56] duration metric: took 14.468625ms for fixHost
	I0408 04:33:43.561719    8784 start.go:83] releasing machines lock for "ha-013000", held for 14.485292ms
	W0408 04:33:43.561723    8784 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:33:43.561747    8784 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:33:43.561751    8784 start.go:728] Will try again in 5 seconds ...
	I0408 04:33:48.563137    8784 start.go:360] acquireMachinesLock for ha-013000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:33:48.563559    8784 start.go:364] duration metric: took 315.542µs to acquireMachinesLock for "ha-013000"
	I0408 04:33:48.563689    8784 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:33:48.563709    8784 fix.go:54] fixHost starting: 
	I0408 04:33:48.564433    8784 fix.go:112] recreateIfNeeded on ha-013000: state=Stopped err=<nil>
	W0408 04:33:48.564464    8784 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:33:48.573870    8784 out.go:177] * Restarting existing qemu2 VM for "ha-013000" ...
	I0408 04:33:48.578861    8784 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:44:2d:71:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/ha-013000/disk.qcow2
	I0408 04:33:48.588161    8784 main.go:141] libmachine: STDOUT: 
	I0408 04:33:48.588229    8784 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:33:48.588291    8784 fix.go:56] duration metric: took 24.582125ms for fixHost
	I0408 04:33:48.588317    8784 start.go:83] releasing machines lock for "ha-013000", held for 24.733708ms
	W0408 04:33:48.588532    8784 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:33:48.596857    8784 out.go:177] 
	W0408 04:33:48.600878    8784 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:33:48.600952    8784 out.go:239] * 
	* 
	W0408 04:33:48.603869    8784 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:33:48.610815    8784 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-013000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (69.657125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-013000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.208167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-013000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-013000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.819417ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-013000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-013000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:33:48.833537    8800 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:33:48.833701    8800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:48.833704    8800 out.go:304] Setting ErrFile to fd 2...
	I0408 04:33:48.833706    8800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:33:48.833839    8800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:33:48.834067    8800 mustload.go:65] Loading cluster: ha-013000
	I0408 04:33:48.834245    8800 config.go:182] Loaded profile config "ha-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:33:48.837779    8800 out.go:177] * The control-plane node ha-013000 host is not running: state=Stopped
	I0408 04:33:48.841716    8800 out.go:177]   To start a cluster, run: "minikube start -p ha-013000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-013000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.160417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-013000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-013000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-013000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-013000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-013000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-013000 -n ha-013000: exit status 7 (32.068125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

                                                
                                    
TestImageBuild/serial/Setup (9.92s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-853000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-853000 --driver=qemu2 : exit status 80 (9.85235375s)

                                                
                                                
-- stdout --
	* [image-853000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-853000" primary control-plane node in "image-853000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-853000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-853000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-853000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-853000 -n image-853000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-853000 -n image-853000: exit status 7 (69.923666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-853000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.92s)
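Diagnosis: every qemu2 start in this report fails at the same point. libmachine launches QEMU through socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, i.e. no socket_vmnet daemon is listening on the build host. A minimal Go probe reproduces the error independently of minikube; this is a sketch that assumes socket_vmnet listens on a stream Unix socket at the path shown in the logs:

	// probe_socket_vmnet.go - a minimal sketch, assuming socket_vmnet
	// listens on a stream Unix socket at the path the logs show.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Same socket path minikube passes to socket_vmnet_client in this run.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With no daemon listening this prints "connection refused",
			// matching the StartHost failures above.
			fmt.Fprintln(os.Stderr, "probe failed:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

While the daemon is down, `go run probe_socket_vmnet.go` should fail with the same "connection refused" seen in the test logs; once the daemon is restarted on the agent, the probe (and these start paths) should succeed.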

TestJSONOutput/start/Command (9.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-703000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-703000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.924797667s)

-- stdout --
	{"specversion":"1.0","id":"e2fca1e5-46b8-46e1-a80a-e42c36e0f6bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-703000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ddd1403-c70f-4ced-9907-ef1bd58a831d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18588"}}
	{"specversion":"1.0","id":"84aca041-b97b-45d3-8447-13ccb750c5b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig"}}
	{"specversion":"1.0","id":"fd6ad803-5b5b-4744-b4a0-d004dadb497a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3e2d301a-dd4a-4be2-85e1-b821aadeea1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cd0c914b-3c6b-408d-ac0f-a03356809091","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube"}}
	{"specversion":"1.0","id":"a84d0d81-3ec7-465a-a86f-430555c9d4de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"742195ee-3eae-4aec-9a9b-aa77853b4d26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dccab093-0718-4a8a-9116-7948dbd65cd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"eb5d9788-5115-43e0-ada9-9b3c9c76ef82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-703000\" primary control-plane node in \"json-output-703000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"37d60701-ceae-41de-b5c6-5ce87eb67255","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"6b220fba-21ac-43be-b3a3-220eb2188149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-703000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b154ac3-7377-4273-9bee-9e20c58d131d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fd428070-099c-4410-8965-7d014f9a338d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"bfb534d5-21bd-4c39-91f0-c95b8e827309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-703000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6bb35494-46ca-43ef-a77d-d4bcefeab5bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"6f1c67d7-9b33-4ede-a0f9-30c25e1ab148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-703000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.93s)
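Note: the "unable to marshal output: OUTPUT:" and "invalid character 'O'" lines are a downstream symptom, not a second bug. The raw OUTPUT:/ERROR: text that socket_vmnet_client writes to stdout interleaves with the CloudEvents JSON stream, so line-by-line decoding fails on the first non-JSON line. A standalone sketch (illustrative only; this loop is not the json_output_test.go code itself) reproduces the decoder error:

	// cloudevents_lines.go - hedged sketch of why line-by-line
	// CloudEvents decoding chokes on this run's mixed output.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=18588"}}`,
			`OUTPUT: `, // raw socket_vmnet_client output, not JSON
		}
		for _, l := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(l), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				return
			}
		}
		fmt.Println("all lines parsed")
	}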

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-703000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-703000 --output=json --user=testUser: exit status 83 (80.423291ms)

-- stdout --
	{"specversion":"1.0","id":"5e1e9aa4-f0f2-4b05-84b1-6f59aff6f6cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-703000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"d2bd5b29-9504-4319-841d-069cdc558e52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-703000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-703000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-703000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-703000 --output=json --user=testUser: exit status 83 (47.46975ms)

-- stdout --
	* The control-plane node json-output-703000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-703000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-703000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-703000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.4s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-260000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-260000 --driver=qemu2 : exit status 80 (9.952632542s)

-- stdout --
	* [first-260000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-260000" primary control-plane node in "first-260000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-260000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-260000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-08 04:34:22.87037 -0700 PDT m=+489.493502292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-262000 -n second-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-262000 -n second-262000: exit status 85 (85.335708ms)

-- stdout --
	* Profile "second-262000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-262000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-262000" host is not running, skipping log retrieval (state="* Profile \"second-262000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-262000\"")
helpers_test.go:175: Cleaning up "second-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-262000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-08 04:34:23.185645 -0700 PDT m=+489.808782376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-260000 -n first-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-260000 -n first-260000: exit status 7 (32.124167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-260000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-260000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-260000
--- FAIL: TestMinikubeProfile (10.40s)

TestMountStart/serial/StartWithMountFirst (10.1s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-429000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-429000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.0227385s)

-- stdout --
	* [mount-start-1-429000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-429000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-429000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-429000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-429000 -n mount-start-1-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-429000 -n mount-start-1-429000: exit status 7 (72.157625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.10s)

TestMultiNode/serial/FreshStart2Nodes (9.98s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-464000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-464000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.909103125s)

-- stdout --
	* [multinode-464000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-464000" primary control-plane node in "multinode-464000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-464000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:34:33.777988    8962 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:34:33.778119    8962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:34:33.778122    8962 out.go:304] Setting ErrFile to fd 2...
	I0408 04:34:33.778124    8962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:34:33.778248    8962 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:34:33.779340    8962 out.go:298] Setting JSON to false
	I0408 04:34:33.795323    8962 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5642,"bootTime":1712570431,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:34:33.795387    8962 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:34:33.801516    8962 out.go:177] * [multinode-464000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:34:33.809505    8962 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:34:33.809579    8962 notify.go:220] Checking for updates...
	I0408 04:34:33.814488    8962 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:34:33.817493    8962 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:34:33.825434    8962 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:34:33.828490    8962 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:34:33.831457    8962 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:34:33.834585    8962 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:34:33.838396    8962 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:34:33.845432    8962 start.go:297] selected driver: qemu2
	I0408 04:34:33.845439    8962 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:34:33.845446    8962 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:34:33.847791    8962 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:34:33.850455    8962 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:34:33.853454    8962 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:34:33.853490    8962 cni.go:84] Creating CNI manager for ""
	I0408 04:34:33.853494    8962 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0408 04:34:33.853498    8962 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 04:34:33.853537    8962 start.go:340] cluster config:
	{Name:multinode-464000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:34:33.858116    8962 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:34:33.865272    8962 out.go:177] * Starting "multinode-464000" primary control-plane node in "multinode-464000" cluster
	I0408 04:34:33.869427    8962 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:34:33.869453    8962 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:34:33.869462    8962 cache.go:56] Caching tarball of preloaded images
	I0408 04:34:33.869517    8962 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:34:33.869523    8962 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:34:33.869744    8962 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/multinode-464000/config.json ...
	I0408 04:34:33.869756    8962 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/multinode-464000/config.json: {Name:mk63d4e3324b373f730eaef215d242dcc1e11045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:34:33.869969    8962 start.go:360] acquireMachinesLock for multinode-464000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:34:33.869999    8962 start.go:364] duration metric: took 24.583µs to acquireMachinesLock for "multinode-464000"
	I0408 04:34:33.870009    8962 start.go:93] Provisioning new machine with config: &{Name:multinode-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.3 ClusterName:multinode-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:34:33.870037    8962 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:34:33.878403    8962 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:34:33.895601    8962 start.go:159] libmachine.API.Create for "multinode-464000" (driver="qemu2")
	I0408 04:34:33.895632    8962 client.go:168] LocalClient.Create starting
	I0408 04:34:33.895692    8962 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:34:33.895721    8962 main.go:141] libmachine: Decoding PEM data...
	I0408 04:34:33.895729    8962 main.go:141] libmachine: Parsing certificate...
	I0408 04:34:33.895765    8962 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:34:33.895787    8962 main.go:141] libmachine: Decoding PEM data...
	I0408 04:34:33.895793    8962 main.go:141] libmachine: Parsing certificate...
	I0408 04:34:33.896136    8962 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:34:34.043931    8962 main.go:141] libmachine: Creating SSH key...
	I0408 04:34:34.236995    8962 main.go:141] libmachine: Creating Disk image...
	I0408 04:34:34.237004    8962 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:34:34.237198    8962 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:34:34.250215    8962 main.go:141] libmachine: STDOUT: 
	I0408 04:34:34.250232    8962 main.go:141] libmachine: STDERR: 
	I0408 04:34:34.250283    8962 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2 +20000M
	I0408 04:34:34.260908    8962 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:34:34.260923    8962 main.go:141] libmachine: STDERR: 
	I0408 04:34:34.260942    8962 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:34:34.260948    8962 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:34:34.260976    8962 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:55:94:a6:dd:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:34:34.262660    8962 main.go:141] libmachine: STDOUT: 
	I0408 04:34:34.262675    8962 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:34:34.262697    8962 client.go:171] duration metric: took 367.062333ms to LocalClient.Create
	I0408 04:34:36.264863    8962 start.go:128] duration metric: took 2.394833s to createHost
	I0408 04:34:36.264928    8962 start.go:83] releasing machines lock for "multinode-464000", held for 2.394953125s
	W0408 04:34:36.265030    8962 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:34:36.272360    8962 out.go:177] * Deleting "multinode-464000" in qemu2 ...
	W0408 04:34:36.302619    8962 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:34:36.302645    8962 start.go:728] Will try again in 5 seconds ...
	I0408 04:34:41.304781    8962 start.go:360] acquireMachinesLock for multinode-464000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:34:41.305253    8962 start.go:364] duration metric: took 314.334µs to acquireMachinesLock for "multinode-464000"
	I0408 04:34:41.305383    8962 start.go:93] Provisioning new machine with config: &{Name:multinode-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.3 ClusterName:multinode-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:34:41.305738    8962 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:34:41.316419    8962 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:34:41.364479    8962 start.go:159] libmachine.API.Create for "multinode-464000" (driver="qemu2")
	I0408 04:34:41.364528    8962 client.go:168] LocalClient.Create starting
	I0408 04:34:41.364632    8962 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:34:41.364705    8962 main.go:141] libmachine: Decoding PEM data...
	I0408 04:34:41.364723    8962 main.go:141] libmachine: Parsing certificate...
	I0408 04:34:41.364780    8962 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:34:41.364822    8962 main.go:141] libmachine: Decoding PEM data...
	I0408 04:34:41.364840    8962 main.go:141] libmachine: Parsing certificate...
	I0408 04:34:41.365490    8962 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:34:41.522537    8962 main.go:141] libmachine: Creating SSH key...
	I0408 04:34:41.580159    8962 main.go:141] libmachine: Creating Disk image...
	I0408 04:34:41.580164    8962 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:34:41.580321    8962 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:34:41.592769    8962 main.go:141] libmachine: STDOUT: 
	I0408 04:34:41.592789    8962 main.go:141] libmachine: STDERR: 
	I0408 04:34:41.592833    8962 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2 +20000M
	I0408 04:34:41.603401    8962 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:34:41.603422    8962 main.go:141] libmachine: STDERR: 
	I0408 04:34:41.603447    8962 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:34:41.603451    8962 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:34:41.603482    8962 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:c6:59:9f:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:34:41.605233    8962 main.go:141] libmachine: STDOUT: 
	I0408 04:34:41.605253    8962 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:34:41.605268    8962 client.go:171] duration metric: took 240.739292ms to LocalClient.Create
	I0408 04:34:43.607426    8962 start.go:128] duration metric: took 2.301654792s to createHost
	I0408 04:34:43.607479    8962 start.go:83] releasing machines lock for "multinode-464000", held for 2.302234458s
	W0408 04:34:43.607894    8962 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-464000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-464000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:34:43.623616    8962 out.go:177] 
	W0408 04:34:43.627646    8962 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:34:43.627690    8962 out.go:239] * 
	* 
	W0408 04:34:43.630647    8962 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:34:43.642494    8962 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-464000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (70.418542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.98s)
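The -v=8 trace above shows the exact launch path: qemu-system-aarch64 is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands QEMU the connected descriptor (-netdev socket,id=net0,fd=3). The wrapper can be exercised with a harmless command in place of QEMU to isolate the socket failure from QEMU itself. A sketch, assuming socket_vmnet_client simply execs its trailing argv once connected, as the trace suggests; /usr/bin/true is a stand-in of our choosing, not part of the test:

	// client_probe.go - hedged sketch: run socket_vmnet_client with a
	// trivial command instead of qemu-system-aarch64 to test the socket.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client", // path from the trace above
			"/var/run/socket_vmnet",
			"/usr/bin/true", // stand-in for the qemu-system-aarch64 invocation
		)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Expect the same `Failed to connect ... Connection refused`
			// seen in the libmachine STDERR lines while the daemon is down.
			fmt.Fprintln(os.Stderr, "client probe failed:", err)
		}
	}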

TestMultiNode/serial/DeployApp2Nodes (81.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.72125ms)

** stderr ** 
	error: cluster "multinode-464000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- rollout status deployment/busybox: exit status 1 (59.007583ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.730917ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.050666ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.71675ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.825042ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.774167ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.016667ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.189292ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.785375ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.06925ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.87875ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
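
Note on the failing query: `-o jsonpath='{.items[*].status.podIP}'` prints every pod IP on one space-separated line, so a successful attempt needs no JSON decoding, only splitting. A minimal Go sketch of that parsing step (the helper name and example IPs are illustrative, not taken from this run):

    package main

    import (
        "fmt"
        "strings"
    )

    // podIPs splits jsonpath output such as "10.244.0.3 10.244.1.2"
    // into one string per pod; empty output yields an empty slice.
    func podIPs(jsonpathOut string) []string {
        return strings.Fields(strings.TrimSpace(jsonpathOut))
    }

    func main() {
        fmt.Println(podIPs("10.244.0.3 10.244.1.2"))
    }

Here every attempt exited 1 before producing any output, so after six tries the test gives up and moves on to the pod-name query below.
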
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.260209ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.687042ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.347875ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.954583ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (32.204ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (81.69s)

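Every kubectl call in this test failed the same way: `no server found for cluster "multinode-464000"` is kubectl's kubeconfig validation error, raised when the profile's cluster entry carries no API-server URL, which fits a VM that never came up. A sketch of checking that directly against the harness kubeconfig (path taken from this report's log lines; assumes client-go's clientcmd package is available):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // KUBECONFIG path as printed elsewhere in this report.
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18588-7343/kubeconfig")
        if err != nil {
            panic(err)
        }
        // A profile whose VM never started is missing here, or has an
        // empty Server, which kubectl reports as "no server found".
        c, ok := cfg.Clusters["multinode-464000"]
        if !ok || c.Server == "" {
            fmt.Println("no API server recorded for multinode-464000")
            return
        }
        fmt.Println("API server:", c.Server)
    }
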
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-464000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.171791ms)

** stderr ** 
	error: no server found for cluster "multinode-464000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (32.512958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-464000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-464000 -v 3 --alsologtostderr: exit status 83 (43.772291ms)

-- stdout --
	* The control-plane node multinode-464000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-464000"

-- /stdout --
** stderr ** 
	I0408 04:36:05.540735    9053 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:05.540883    9053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:05.540888    9053 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:05.540890    9053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:05.541020    9053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:05.541259    9053 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:05.541430    9053 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:05.545341    9053 out.go:177] * The control-plane node multinode-464000 host is not running: state=Stopped
	I0408 04:36:05.549527    9053 out.go:177]   To start a cluster, run: "minikube start -p multinode-464000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-464000 -v 3 --alsologtostderr" : exit status 83
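
`node add` bails out with exit status 83 before attempting anything on the worker, because the control-plane host is stopped, as the stdout above says. A hedged sketch of the guard a caller could run first, reusing the same status probe the post-mortem below issues (binary path and profile name from this report):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as the post-mortem: ask for the host state only.
        out, _ := exec.Command("out/minikube-darwin-arm64",
            "status", "--format", "{{.Host}}", "-p", "multinode-464000").Output()
        if strings.TrimSpace(string(out)) != "Running" {
            fmt.Println(`host not running; run "minikube start -p multinode-464000" first`)
            return
        }
        // Only now is it worth running: minikube node add -p multinode-464000
    }
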
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (31.3095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-464000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-464000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.291458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-464000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-464000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-464000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
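
The two messages above are one failure, not two: the kubectl context is missing, so the command prints nothing to stdout, and handing zero bytes to Go's JSON decoder always yields exactly `unexpected end of JSON input`. A self-contained illustration:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl wrote nothing, so the label decode sees empty input.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }
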
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (31.9895ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-464000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-464000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-464000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-464000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
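
The assertion only cares about one field buried in that blob: `Config.Nodes`, which holds a single entry for the stopped profile where a three-node cluster was expected. A trimmed-down decode showing where the count comes from (the structs are minimal stand-ins for minikube's real config types, and the input is abbreviated to the relevant fields):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList models only what the check reads; Go's decoder
    // silently ignores the many other fields in the real output.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []json.RawMessage `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-464000","Config":{"Nodes":[{"Name":""}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1; the test wants 3
    }
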
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (32.262792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status --output json --alsologtostderr: exit status 7 (32.054209ms)

-- stdout --
	{"Name":"multinode-464000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0408 04:36:05.778087    9066 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:05.778228    9066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:05.778231    9066 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:05.778235    9066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:05.778361    9066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:05.778497    9066 out.go:298] Setting JSON to true
	I0408 04:36:05.778510    9066 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:05.778571    9066 notify.go:220] Checking for updates...
	I0408 04:36:05.778728    9066 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:05.778738    9066 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:05.778941    9066 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:05.778945    9066 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:05.778947    9066 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-464000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
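
This decode error is a shape mismatch rather than corrupt output: with a single node, the status command emitted one JSON object (the stdout above), while the test unmarshals into a slice, hence `cannot unmarshal object into Go value of type []cmd.Status`. A tolerant decoder that accepts both shapes (`nodeStatus` here is a hypothetical stand-in for cmd.Status):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeStatus struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        // Single-node output copied from the stdout block above.
        raw := []byte(`{"Name":"multinode-464000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        var many []nodeStatus
        if err := json.Unmarshal(raw, &many); err != nil {
            // Fall back to the single-object shape.
            var one nodeStatus
            if err := json.Unmarshal(raw, &one); err != nil {
                panic(err)
            }
            many = []nodeStatus{one}
        }
        fmt.Println(len(many), many[0].Host) // 1 Stopped
    }
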
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (32.314208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 node stop m03: exit status 85 (51.027084ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-464000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status: exit status 7 (32.371584ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr: exit status 7 (32.66625ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:05.927201    9074 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:05.927375    9074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:05.927378    9074 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:05.927381    9074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:05.927517    9074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:05.927638    9074 out.go:298] Setting JSON to false
	I0408 04:36:05.927653    9074 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:05.927714    9074 notify.go:220] Checking for updates...
	I0408 04:36:05.927862    9074 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:05.927869    9074 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:05.928073    9074 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:05.928077    9074 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:05.928079    9074 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr": multinode-464000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (32.660417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (46.83s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.93075ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0408 04:36:05.992539    9078 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:05.993025    9078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:05.993032    9078 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:05.993035    9078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:05.993186    9078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:05.993397    9078 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:05.993571    9078 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:05.997996    9078 out.go:177] 
	W0408 04:36:06.000943    9078 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0408 04:36:06.000947    9078 out.go:239] * 
	* 
	W0408 04:36:06.002934    9078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:36:06.006904    9078 out.go:177] 

** /stderr **
multinode_test.go:284: I0408 04:36:05.992539    9078 out.go:291] Setting OutFile to fd 1 ...
I0408 04:36:05.993025    9078 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:36:05.993032    9078 out.go:304] Setting ErrFile to fd 2...
I0408 04:36:05.993035    9078 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 04:36:05.993186    9078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
I0408 04:36:05.993397    9078 mustload.go:65] Loading cluster: multinode-464000
I0408 04:36:05.993571    9078 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 04:36:05.997996    9078 out.go:177] 
W0408 04:36:06.000943    9078 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0408 04:36:06.000947    9078 out.go:239] * 
* 
W0408 04:36:06.002934    9078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 04:36:06.006904    9078 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-464000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (31.824083ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:06.041080    9080 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:06.041234    9080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:06.041238    9080 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:06.041240    9080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:06.041375    9080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:06.041496    9080 out.go:298] Setting JSON to false
	I0408 04:36:06.041507    9080 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:06.041563    9080 notify.go:220] Checking for updates...
	I0408 04:36:06.041711    9080 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:06.041720    9080 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:06.041907    9080 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:06.041911    9080 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:06.041914    9080 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (75.811666ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:07.203383    9082 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:07.203579    9082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:07.203583    9082 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:07.203586    9082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:07.203745    9082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:07.203898    9082 out.go:298] Setting JSON to false
	I0408 04:36:07.203912    9082 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:07.203951    9082 notify.go:220] Checking for updates...
	I0408 04:36:07.204178    9082 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:07.204185    9082 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:07.204462    9082 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:07.204467    9082 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:07.204470    9082 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (77.130292ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:09.379542    9084 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:09.379712    9084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:09.379717    9084 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:09.379720    9084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:09.379885    9084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:09.380042    9084 out.go:298] Setting JSON to false
	I0408 04:36:09.380055    9084 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:09.380079    9084 notify.go:220] Checking for updates...
	I0408 04:36:09.380309    9084 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:09.380316    9084 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:09.380591    9084 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:09.380596    9084 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:09.380599    9084 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (77.260333ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:11.622709    9087 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:11.622904    9087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:11.622908    9087 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:11.622911    9087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:11.623085    9087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:11.623235    9087 out.go:298] Setting JSON to false
	I0408 04:36:11.623250    9087 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:11.623286    9087 notify.go:220] Checking for updates...
	I0408 04:36:11.623509    9087 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:11.623517    9087 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:11.623788    9087 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:11.623793    9087 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:11.623796    9087 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (76.709042ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:15.760730    9089 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:15.760895    9089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:15.760900    9089 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:15.760903    9089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:15.761066    9089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:15.761233    9089 out.go:298] Setting JSON to false
	I0408 04:36:15.761251    9089 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:15.761296    9089 notify.go:220] Checking for updates...
	I0408 04:36:15.761496    9089 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:15.761503    9089 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:15.761764    9089 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:15.761769    9089 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:15.761772    9089 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (74.1125ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:21.262151    9091 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:21.262345    9091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:21.262348    9091 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:21.262351    9091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:21.262502    9091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:21.262658    9091 out.go:298] Setting JSON to false
	I0408 04:36:21.262672    9091 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:21.262703    9091 notify.go:220] Checking for updates...
	I0408 04:36:21.262913    9091 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:21.262921    9091 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:21.263186    9091 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:21.263190    9091 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:21.263193    9091 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (74.03725ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:26.901928    9093 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:26.902090    9093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:26.902094    9093 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:26.902097    9093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:26.902236    9093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:26.902378    9093 out.go:298] Setting JSON to false
	I0408 04:36:26.902393    9093 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:26.902429    9093 notify.go:220] Checking for updates...
	I0408 04:36:26.902623    9093 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:26.902630    9093 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:26.902871    9093 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:26.902875    9093 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:26.902878    9093 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (76.435334ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:33.452734    9095 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:33.452924    9095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:33.452928    9095 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:33.452931    9095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:33.453117    9095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:33.453275    9095 out.go:298] Setting JSON to false
	I0408 04:36:33.453289    9095 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:33.453323    9095 notify.go:220] Checking for updates...
	I0408 04:36:33.453559    9095 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:33.453567    9095 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:33.453834    9095 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:33.453839    9095 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:33.453841    9095 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr: exit status 7 (76.831291ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:36:52.752623    9109 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:52.752798    9109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:52.752802    9109 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:52.752805    9109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:52.752971    9109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:52.753131    9109 out.go:298] Setting JSON to false
	I0408 04:36:52.753144    9109 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:36:52.753175    9109 notify.go:220] Checking for updates...
	I0408 04:36:52.753402    9109 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:52.753409    9109 status.go:255] checking status of multinode-464000 ...
	I0408 04:36:52.753678    9109 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:36:52.753682    9109 status.go:343] host is not running, skipping remaining checks
	I0408 04:36:52.753685    9109 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-464000 status -v=7 --alsologtostderr" : exit status 7
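
The timestamps on the nine probes above (04:36:06, :07, :09, :11, :15, :21, :26, :33, :52) show the test spacing its retries out rather than polling at a fixed rate. A minimal sketch of a capped, growing backoff of that shape; the 2x multiplier, 10s cap, and overall budget are illustrative guesses, not minikube's actual tuning:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        delay := time.Second
        deadline := time.Now().Add(45 * time.Second)
        for time.Now().Before(deadline) {
            // Same probe as above: host state for the profile.
            out, _ := exec.Command("out/minikube-darwin-arm64", "-p",
                "multinode-464000", "status", "--format", "{{.Host}}").Output()
            if strings.TrimSpace(string(out)) == "Running" {
                fmt.Println("host is up")
                return
            }
            time.Sleep(delay)
            if delay *= 2; delay > 10*time.Second {
                delay = 10 * time.Second
            }
        }
        fmt.Println("gave up waiting for the host")
    }
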
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (34.402042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (46.83s)

TestMultiNode/serial/RestartKeepsNodes (8.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-464000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-464000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-464000: (3.309822666s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-464000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-464000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.237386375s)

-- stdout --
	* [multinode-464000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-464000" primary control-plane node in "multinode-464000" cluster
	* Restarting existing qemu2 VM for "multinode-464000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-464000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:36:56.199675    9133 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:36:56.199852    9133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:56.199856    9133 out.go:304] Setting ErrFile to fd 2...
	I0408 04:36:56.199859    9133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:36:56.200026    9133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:36:56.201183    9133 out.go:298] Setting JSON to false
	I0408 04:36:56.220533    9133 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5785,"bootTime":1712570431,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:36:56.220603    9133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:36:56.225241    9133 out.go:177] * [multinode-464000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:36:56.232088    9133 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:36:56.232126    9133 notify.go:220] Checking for updates...
	I0408 04:36:56.240005    9133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:36:56.248050    9133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:36:56.251126    9133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:36:56.254184    9133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:36:56.262114    9133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:36:56.266445    9133 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:36:56.266506    9133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:36:56.271084    9133 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:36:56.278152    9133 start.go:297] selected driver: qemu2
	I0408 04:36:56.278158    9133 start.go:901] validating driver "qemu2" against &{Name:multinode-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:36:56.278206    9133 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:36:56.280721    9133 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:36:56.280777    9133 cni.go:84] Creating CNI manager for ""
	I0408 04:36:56.280782    9133 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 04:36:56.280828    9133 start.go:340] cluster config:
	{Name:multinode-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:36:56.285488    9133 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:36:56.293162    9133 out.go:177] * Starting "multinode-464000" primary control-plane node in "multinode-464000" cluster
	I0408 04:36:56.296115    9133 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:36:56.296131    9133 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:36:56.296138    9133 cache.go:56] Caching tarball of preloaded images
	I0408 04:36:56.296198    9133 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:36:56.296204    9133 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:36:56.296266    9133 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/multinode-464000/config.json ...
	I0408 04:36:56.296776    9133 start.go:360] acquireMachinesLock for multinode-464000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:36:56.296814    9133 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "multinode-464000"
	I0408 04:36:56.296824    9133 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:36:56.296828    9133 fix.go:54] fixHost starting: 
	I0408 04:36:56.296969    9133 fix.go:112] recreateIfNeeded on multinode-464000: state=Stopped err=<nil>
	W0408 04:36:56.296979    9133 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:36:56.304126    9133 out.go:177] * Restarting existing qemu2 VM for "multinode-464000" ...
	I0408 04:36:56.308184    9133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:c6:59:9f:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:36:56.310661    9133 main.go:141] libmachine: STDOUT: 
	I0408 04:36:56.310689    9133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:36:56.310725    9133 fix.go:56] duration metric: took 13.894875ms for fixHost
	I0408 04:36:56.310730    9133 start.go:83] releasing machines lock for "multinode-464000", held for 13.911375ms
	W0408 04:36:56.310739    9133 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:36:56.310770    9133 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:36:56.310776    9133 start.go:728] Will try again in 5 seconds ...
	I0408 04:37:01.311460    9133 start.go:360] acquireMachinesLock for multinode-464000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:37:01.311852    9133 start.go:364] duration metric: took 277.625µs to acquireMachinesLock for "multinode-464000"
	I0408 04:37:01.312003    9133 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:37:01.312025    9133 fix.go:54] fixHost starting: 
	I0408 04:37:01.312779    9133 fix.go:112] recreateIfNeeded on multinode-464000: state=Stopped err=<nil>
	W0408 04:37:01.312805    9133 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:37:01.318176    9133 out.go:177] * Restarting existing qemu2 VM for "multinode-464000" ...
	I0408 04:37:01.322453    9133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:c6:59:9f:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:37:01.332131    9133 main.go:141] libmachine: STDOUT: 
	I0408 04:37:01.332197    9133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:37:01.332289    9133 fix.go:56] duration metric: took 20.267ms for fixHost
	I0408 04:37:01.332306    9133 start.go:83] releasing machines lock for "multinode-464000", held for 20.430292ms
	W0408 04:37:01.332479    9133 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-464000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-464000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:37:01.340161    9133 out.go:177] 
	W0408 04:37:01.344297    9133 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:37:01.344338    9133 out.go:239] * 
	* 
	W0408 04:37:01.347071    9133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:37:01.355194    9133 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-464000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-464000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (33.905792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.69s)
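Every failure in this block bottoms out at the same first step: socket_vmnet_client cannot reach the socket_vmnet daemon on its unix socket. Below is a minimal diagnostic sketch in Go, not part of the test suite, that reproduces just that connection check; the socket path is the one reported in the driver logs above.

	// probe_socket_vmnet.go - hypothetical standalone probe, not minikube code.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing driver logs above

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver failure: the socket
			// file may exist, but no daemon is accepting connections on it.
			fmt.Printf("socket_vmnet not reachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe also reports "Connection refused", the daemon behind /var/run/socket_vmnet is simply not running on the CI host, which would account for every GUEST_PROVISION exit in this report.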

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 node delete m03: exit status 83 (42.492666ms)

-- stdout --
	* The control-plane node multinode-464000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-464000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-464000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr: exit status 7 (32.146375ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:37:01.549291    9149 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:37:01.549416    9149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:37:01.549419    9149 out.go:304] Setting ErrFile to fd 2...
	I0408 04:37:01.549421    9149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:37:01.549546    9149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:37:01.549665    9149 out.go:298] Setting JSON to false
	I0408 04:37:01.549679    9149 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:37:01.549734    9149 notify.go:220] Checking for updates...
	I0408 04:37:01.549873    9149 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:37:01.549879    9149 status.go:255] checking status of multinode-464000 ...
	I0408 04:37:01.550067    9149 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:37:01.550070    9149 status.go:343] host is not running, skipping remaining checks
	I0408 04:37:01.550073    9149 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (32.411541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
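Three distinct exit codes appear around this failure: 80 for the GUEST_PROVISION errors above, 83 when a command bails out with the "host is not running" guidance, and 7 from `status` against a stopped host (which helpers_test treats as "may be ok"). A hedged sketch of how a harness can recover such codes with os/exec; the binary and arguments are the ones quoted in this block.

	// exit_code_check.go - illustrative only; mirrors the node-delete call above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-464000", "node", "delete", "m03")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			// In the log above this prints 83, the stopped-host guidance path.
			fmt.Printf("exit code: %d\n", exitErr.ExitCode())
		}
	}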

TestMultiNode/serial/StopMultiNode (3.47s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-464000 stop: (3.339703459s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status: exit status 7 (66.365041ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr: exit status 7 (33.901917ms)

-- stdout --
	multinode-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 04:37:05.022153    9175 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:37:05.022298    9175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:37:05.022301    9175 out.go:304] Setting ErrFile to fd 2...
	I0408 04:37:05.022303    9175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:37:05.022449    9175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:37:05.022573    9175 out.go:298] Setting JSON to false
	I0408 04:37:05.022584    9175 mustload.go:65] Loading cluster: multinode-464000
	I0408 04:37:05.022645    9175 notify.go:220] Checking for updates...
	I0408 04:37:05.022807    9175 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:37:05.022813    9175 status.go:255] checking status of multinode-464000 ...
	I0408 04:37:05.023029    9175 status.go:330] multinode-464000 host status = "Stopped" (err=<nil>)
	I0408 04:37:05.023032    9175 status.go:343] host is not running, skipping remaining checks
	I0408 04:37:05.023034    9175 status.go:257] multinode-464000 status: &{Name:multinode-464000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr": multinode-464000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-464000 status --alsologtostderr": multinode-464000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (32.097625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.47s)
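The two assertions that fail here ("incorrect number of stopped hosts/kubelets") compare per-node state lines in the status output against the expected node count; with only the lone control-plane node left in the profile, one "Stopped" line is found where the multinode test expects more. A rough approximation of that check, assuming a plain substring count and a hypothetical two-node expectation (the real assertion lives in multinode_test.go).

	// stopped_count.go - approximation of the failing assertion, not the real test.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		status := "multinode-464000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		wantNodes := 2 // assumed node count, for illustration only
		if got := strings.Count(status, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
		if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
		}
	}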

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-464000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-464000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.187658208s)

-- stdout --
	* [multinode-464000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-464000" primary control-plane node in "multinode-464000" cluster
	* Restarting existing qemu2 VM for "multinode-464000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-464000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:37:05.086308    9179 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:37:05.086453    9179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:37:05.086457    9179 out.go:304] Setting ErrFile to fd 2...
	I0408 04:37:05.086459    9179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:37:05.086588    9179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:37:05.087564    9179 out.go:298] Setting JSON to false
	I0408 04:37:05.103774    9179 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5794,"bootTime":1712570431,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:37:05.103840    9179 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:37:05.107736    9179 out.go:177] * [multinode-464000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:37:05.115676    9179 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:37:05.119677    9179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:37:05.115727    9179 notify.go:220] Checking for updates...
	I0408 04:37:05.122748    9179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:37:05.125711    9179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:37:05.128641    9179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:37:05.131769    9179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:37:05.134882    9179 config.go:182] Loaded profile config "multinode-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:37:05.135159    9179 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:37:05.139704    9179 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:37:05.146656    9179 start.go:297] selected driver: qemu2
	I0408 04:37:05.146663    9179 start.go:901] validating driver "qemu2" against &{Name:multinode-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:37:05.146722    9179 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:37:05.149156    9179 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:37:05.149206    9179 cni.go:84] Creating CNI manager for ""
	I0408 04:37:05.149212    9179 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 04:37:05.149271    9179 start.go:340] cluster config:
	{Name:multinode-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:37:05.153810    9179 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:05.160546    9179 out.go:177] * Starting "multinode-464000" primary control-plane node in "multinode-464000" cluster
	I0408 04:37:05.164687    9179 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:37:05.164703    9179 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:37:05.164713    9179 cache.go:56] Caching tarball of preloaded images
	I0408 04:37:05.164764    9179 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:37:05.164769    9179 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:37:05.164826    9179 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/multinode-464000/config.json ...
	I0408 04:37:05.165322    9179 start.go:360] acquireMachinesLock for multinode-464000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:37:05.165346    9179 start.go:364] duration metric: took 18.458µs to acquireMachinesLock for "multinode-464000"
	I0408 04:37:05.165354    9179 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:37:05.165360    9179 fix.go:54] fixHost starting: 
	I0408 04:37:05.165467    9179 fix.go:112] recreateIfNeeded on multinode-464000: state=Stopped err=<nil>
	W0408 04:37:05.165475    9179 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:37:05.172599    9179 out.go:177] * Restarting existing qemu2 VM for "multinode-464000" ...
	I0408 04:37:05.176706    9179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:c6:59:9f:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:37:05.178760    9179 main.go:141] libmachine: STDOUT: 
	I0408 04:37:05.178784    9179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:37:05.178811    9179 fix.go:56] duration metric: took 13.451125ms for fixHost
	I0408 04:37:05.178816    9179 start.go:83] releasing machines lock for "multinode-464000", held for 13.4665ms
	W0408 04:37:05.178821    9179 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:37:05.178851    9179 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:37:05.178855    9179 start.go:728] Will try again in 5 seconds ...
	I0408 04:37:10.181026    9179 start.go:360] acquireMachinesLock for multinode-464000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:37:10.181442    9179 start.go:364] duration metric: took 288.417µs to acquireMachinesLock for "multinode-464000"
	I0408 04:37:10.181584    9179 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:37:10.181607    9179 fix.go:54] fixHost starting: 
	I0408 04:37:10.182428    9179 fix.go:112] recreateIfNeeded on multinode-464000: state=Stopped err=<nil>
	W0408 04:37:10.182453    9179 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:37:10.186909    9179 out.go:177] * Restarting existing qemu2 VM for "multinode-464000" ...
	I0408 04:37:10.196075    9179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:c6:59:9f:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/multinode-464000/disk.qcow2
	I0408 04:37:10.206046    9179 main.go:141] libmachine: STDOUT: 
	I0408 04:37:10.206124    9179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:37:10.206250    9179 fix.go:56] duration metric: took 24.64375ms for fixHost
	I0408 04:37:10.206281    9179 start.go:83] releasing machines lock for "multinode-464000", held for 24.80875ms
	W0408 04:37:10.206487    9179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-464000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-464000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:37:10.215810    9179 out.go:177] 
	W0408 04:37:10.218825    9179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:37:10.218861    9179 out.go:239] * 
	* 
	W0408 04:37:10.221448    9179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:37:10.229758    9179 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-464000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (71.393542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
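The stderr trace shows the start path retrying exactly once: StartHost fails, minikube waits five seconds ("Will try again in 5 seconds ..."), retries, and only then exits with GUEST_PROVISION. A minimal sketch of that control flow; startHost here is a stand-in, not minikube's actual function.

	// retry_once.go - shape of the retry visible in the log, not minikube source.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in for the driver start that fails throughout this report.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}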

TestMultiNode/serial/ValidateNameConflict (20.22s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-464000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-464000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-464000-m01 --driver=qemu2 : exit status 80 (9.9463375s)

-- stdout --
	* [multinode-464000-m01] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-464000-m01" primary control-plane node in "multinode-464000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-464000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-464000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-464000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-464000-m02 --driver=qemu2 : exit status 80 (10.009844625s)

-- stdout --
	* [multinode-464000-m02] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-464000-m02" primary control-plane node in "multinode-464000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-464000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-464000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-464000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-464000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-464000: exit status 83 (86.383083ms)

-- stdout --
	* The control-plane node multinode-464000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-464000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-464000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-464000 -n multinode-464000: exit status 7 (32.991333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.22s)
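Every start in this report launches qemu through socket_vmnet_client, which is expected to connect to /var/run/socket_vmnet and hand the connection to qemu as file descriptor 3 (hence "-netdev socket,id=net0,fd=3" in the command lines above); it is that initial connect which keeps failing. A sketch of the fd-handoff idiom in Go, with a placeholder child command rather than the full qemu invocation; socket_vmnet_client itself is not Go code.

	// fd_handoff.go - illustrates the idiom only, not socket_vmnet_client's implementation.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("connect: %v", err) // the step that fails throughout this report
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the descriptor for the child
		if err != nil {
			log.Fatalf("file: %v", err)
		}
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("qemu: %v", err)
		}
	}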

TestPreload (10.08s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-221000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-221000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.908316s)

-- stdout --
	* [test-preload-221000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-221000" primary control-plane node in "test-preload-221000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-221000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:37:30.702122    9242 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:37:30.702245    9242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:37:30.702249    9242 out.go:304] Setting ErrFile to fd 2...
	I0408 04:37:30.702252    9242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:37:30.702383    9242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:37:30.703501    9242 out.go:298] Setting JSON to false
	I0408 04:37:30.719595    9242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5819,"bootTime":1712570431,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:37:30.719657    9242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:37:30.725131    9242 out.go:177] * [test-preload-221000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:37:30.731998    9242 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:37:30.732055    9242 notify.go:220] Checking for updates...
	I0408 04:37:30.737084    9242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:37:30.740069    9242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:37:30.743019    9242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:37:30.746019    9242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:37:30.749050    9242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:37:30.750960    9242 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:37:30.751015    9242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:37:30.755075    9242 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:37:30.761891    9242 start.go:297] selected driver: qemu2
	I0408 04:37:30.761898    9242 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:37:30.761904    9242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:37:30.764383    9242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:37:30.767034    9242 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:37:30.770126    9242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:37:30.770158    9242 cni.go:84] Creating CNI manager for ""
	I0408 04:37:30.770166    9242 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:37:30.770172    9242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:37:30.770205    9242 start.go:340] cluster config:
	{Name:test-preload-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:37:30.774850    9242 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.782065    9242 out.go:177] * Starting "test-preload-221000" primary control-plane node in "test-preload-221000" cluster
	I0408 04:37:30.786110    9242 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0408 04:37:30.786192    9242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/test-preload-221000/config.json ...
	I0408 04:37:30.786215    9242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/test-preload-221000/config.json: {Name:mk17872826bd7cc35d2c3f7d1c852261a5d2f394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:37:30.786218    9242 cache.go:107] acquiring lock: {Name:mk7d827fbe72994058a0aa0b3623e002ddb04e55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.786248    9242 cache.go:107] acquiring lock: {Name:mk85c40ae24f15911980a15ebb2fa7600c509316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.786246    9242 cache.go:107] acquiring lock: {Name:mkbee5e7d287710096ff2ee8079699574832b03a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.786218    9242 cache.go:107] acquiring lock: {Name:mk9877ffaea1c1634c0e03efe73e1284d9ba32bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.786493    9242 cache.go:107] acquiring lock: {Name:mk783387409882c53f7dc41974d3def3353452a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.786504    9242 cache.go:107] acquiring lock: {Name:mk8a930bcd285c228326845b68cbe268464fb63f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.786512    9242 cache.go:107] acquiring lock: {Name:mk81d4d903853a1fb285d960e8b75b10d4e3c203 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.786637    9242 start.go:360] acquireMachinesLock for test-preload-221000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:37:30.786496    9242 cache.go:107] acquiring lock: {Name:mk15269e1bea67309d370f32b75014423fc2b2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:37:30.786637    9242 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0408 04:37:30.786672    9242 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0408 04:37:30.786679    9242 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0408 04:37:30.786701    9242 start.go:364] duration metric: took 54.041µs to acquireMachinesLock for "test-preload-221000"
	I0408 04:37:30.786708    9242 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:37:30.786716    9242 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:37:30.786741    9242 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 04:37:30.786716    9242 start.go:93] Provisioning new machine with config: &{Name:test-preload-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:37:30.786787    9242 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:37:30.791094    9242 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:37:30.786820    9242 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:37:30.786853    9242 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0408 04:37:30.797674    9242 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 04:37:30.797879    9242 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:37:30.797930    9242 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:37:30.798118    9242 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0408 04:37:30.798281    9242 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0408 04:37:30.800170    9242 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0408 04:37:30.800234    9242 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 04:37:30.800261    9242 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:37:30.808323    9242 start.go:159] libmachine.API.Create for "test-preload-221000" (driver="qemu2")
	I0408 04:37:30.808357    9242 client.go:168] LocalClient.Create starting
	I0408 04:37:30.808488    9242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:37:30.808518    9242 main.go:141] libmachine: Decoding PEM data...
	I0408 04:37:30.808528    9242 main.go:141] libmachine: Parsing certificate...
	I0408 04:37:30.808566    9242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:37:30.808588    9242 main.go:141] libmachine: Decoding PEM data...
	I0408 04:37:30.808596    9242 main.go:141] libmachine: Parsing certificate...
	I0408 04:37:30.808894    9242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:37:30.964637    9242 main.go:141] libmachine: Creating SSH key...
	I0408 04:37:31.151164    9242 main.go:141] libmachine: Creating Disk image...
	I0408 04:37:31.151185    9242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:37:31.151353    9242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2
	I0408 04:37:31.163881    9242 main.go:141] libmachine: STDOUT: 
	I0408 04:37:31.163898    9242 main.go:141] libmachine: STDERR: 
	I0408 04:37:31.163960    9242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2 +20000M
	I0408 04:37:31.175691    9242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:37:31.175708    9242 main.go:141] libmachine: STDERR: 
	I0408 04:37:31.175720    9242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2
	I0408 04:37:31.175725    9242 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:37:31.175751    9242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:72:f1:c0:54:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2
	I0408 04:37:31.177683    9242 main.go:141] libmachine: STDOUT: 
	I0408 04:37:31.177709    9242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:37:31.177725    9242 client.go:171] duration metric: took 369.367625ms to LocalClient.Create
	I0408 04:37:31.222599    9242 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0408 04:37:31.226006    9242 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0408 04:37:31.256962    9242 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0408 04:37:31.258695    9242 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0408 04:37:31.260586    9242 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0408 04:37:31.286298    9242 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0408 04:37:31.340214    9242 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0408 04:37:31.340245    9242 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 04:37:31.380485    9242 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0408 04:37:31.380503    9242 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 593.996542ms
	I0408 04:37:31.380521    9242 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0408 04:37:31.459355    9242 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0408 04:37:31.459423    9242 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 04:37:32.135646    9242 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0408 04:37:32.135692    9242 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.349485541s
	I0408 04:37:32.135716    9242 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0408 04:37:33.177911    9242 start.go:128] duration metric: took 2.39113925s to createHost
	I0408 04:37:33.177968    9242 start.go:83] releasing machines lock for "test-preload-221000", held for 2.391291584s
	W0408 04:37:33.177995    9242 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:37:33.189750    9242 out.go:177] * Deleting "test-preload-221000" in qemu2 ...
	W0408 04:37:33.208012    9242 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:37:33.208040    9242 start.go:728] Will try again in 5 seconds ...
	I0408 04:37:33.523096    9242 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0408 04:37:33.523152    9242 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.736709375s
	I0408 04:37:33.523186    9242 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0408 04:37:34.071497    9242 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0408 04:37:34.071559    9242 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.285179375s
	I0408 04:37:34.071605    9242 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0408 04:37:34.284529    9242 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0408 04:37:34.284580    9242 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.498409417s
	I0408 04:37:34.284604    9242 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0408 04:37:36.612681    9242 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0408 04:37:36.612734    9242 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.826577666s
	I0408 04:37:36.612765    9242 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0408 04:37:37.201476    9242 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0408 04:37:37.201523    9242 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.415360083s
	I0408 04:37:37.201552    9242 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0408 04:37:38.208226    9242 start.go:360] acquireMachinesLock for test-preload-221000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:37:38.208656    9242 start.go:364] duration metric: took 347.917µs to acquireMachinesLock for "test-preload-221000"
	I0408 04:37:38.208768    9242 start.go:93] Provisioning new machine with config: &{Name:test-preload-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:37:38.209032    9242 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:37:38.220701    9242 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:37:38.270399    9242 start.go:159] libmachine.API.Create for "test-preload-221000" (driver="qemu2")
	I0408 04:37:38.270451    9242 client.go:168] LocalClient.Create starting
	I0408 04:37:38.270558    9242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:37:38.270617    9242 main.go:141] libmachine: Decoding PEM data...
	I0408 04:37:38.270636    9242 main.go:141] libmachine: Parsing certificate...
	I0408 04:37:38.270711    9242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:37:38.270751    9242 main.go:141] libmachine: Decoding PEM data...
	I0408 04:37:38.270764    9242 main.go:141] libmachine: Parsing certificate...
	I0408 04:37:38.271304    9242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:37:38.428557    9242 main.go:141] libmachine: Creating SSH key...
	I0408 04:37:38.505953    9242 main.go:141] libmachine: Creating Disk image...
	I0408 04:37:38.505959    9242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:37:38.506153    9242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2
	I0408 04:37:38.518910    9242 main.go:141] libmachine: STDOUT: 
	I0408 04:37:38.518934    9242 main.go:141] libmachine: STDERR: 
	I0408 04:37:38.519003    9242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2 +20000M
	I0408 04:37:38.530232    9242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:37:38.530264    9242 main.go:141] libmachine: STDERR: 
	I0408 04:37:38.530277    9242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2
	I0408 04:37:38.530281    9242 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:37:38.530328    9242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:55:42:1d:05:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/test-preload-221000/disk.qcow2
	I0408 04:37:38.532182    9242 main.go:141] libmachine: STDOUT: 
	I0408 04:37:38.532208    9242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:37:38.532225    9242 client.go:171] duration metric: took 261.770625ms to LocalClient.Create
	I0408 04:37:40.494267    9242 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0408 04:37:40.494329    9242 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.70797975s
	I0408 04:37:40.494356    9242 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0408 04:37:40.494399    9242 cache.go:87] Successfully saved all images to host disk.
	I0408 04:37:40.534382    9242 start.go:128] duration metric: took 2.325359125s to createHost
	I0408 04:37:40.534426    9242 start.go:83] releasing machines lock for "test-preload-221000", held for 2.325778666s
	W0408 04:37:40.534730    9242 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-221000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-221000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:37:40.544120    9242 out.go:177] 
	W0408 04:37:40.552300    9242 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:37:40.552326    9242 out.go:239] * 
	* 
	W0408 04:37:40.554999    9242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:37:40.564085    9242 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-221000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-04-08 04:37:40.583256 -0700 PDT m=+687.208799334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-221000 -n test-preload-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-221000 -n test-preload-221000: exit status 7 (68.004917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-221000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-221000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-221000
--- FAIL: TestPreload (10.08s)
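Every create attempt in this run dies at the same step: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so QEMU is never launched. A minimal sketch for confirming that on the host, using the paths the log itself reports (the no-op probe through socket_vmnet_client is illustrative, not part of the test suite):

	# Does the socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet            # SocketVMnetPath from the log
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"

	# socket_vmnet_client connects to the socket and then execs its argument
	# with the vmnet fd attached; a no-op command makes it a reachability probe.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
		&& echo "socket reachable" \
		|| echo "Connection refused, matching the failures above"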

TestScheduledStopUnix (10.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-522000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-522000 --memory=2048 --driver=qemu2 : exit status 80 (9.836790375s)

-- stdout --
	* [scheduled-stop-522000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-522000" primary control-plane node in "scheduled-stop-522000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-522000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-522000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-522000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-522000" primary control-plane node in "scheduled-stop-522000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-522000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-522000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-08 04:37:50.589769 -0700 PDT m=+697.215452709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-522000 -n scheduled-stop-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-522000 -n scheduled-stop-522000: exit status 7 (70.455667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-522000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-522000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-522000
--- FAIL: TestScheduledStopUnix (10.02s)
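TestScheduledStopUnix never reaches its scheduled-stop logic; it fails on the same refused connection to /var/run/socket_vmnet during VM creation. Assuming socket_vmnet was installed via Homebrew (an assumption; the install method is not visible in this log), restarting the daemon before re-running the suite would look roughly like:

	# Hypothetical recovery for a Homebrew-managed socket_vmnet; the daemon
	# must run as root so it can create the underlying vmnet interface.
	HOMEBREW=$(which brew)
	sudo "$HOMEBREW" services start socket_vmnet
	sudo "$HOMEBREW" services list | grep socket_vmnet   # expect state "started"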

TestSkaffold (12.13s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2470473304 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-513000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-513000 --memory=2600 --driver=qemu2 : exit status 80 (9.901470458s)

-- stdout --
	* [skaffold-513000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-513000" primary control-plane node in "skaffold-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-513000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-513000" primary control-plane node in "skaffold-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-04-08 04:38:02.724195 -0700 PDT m=+709.350048584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-513000 -n skaffold-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-513000 -n skaffold-513000: exit status 7 (65.510417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-513000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-513000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-513000
--- FAIL: TestSkaffold (12.13s)
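Note that the disk-preparation half of each create attempt does succeed: the TestPreload log above shows libmachine converting the raw boot2docker image to qcow2 and then growing it by the requested 20000 MB before the socket_vmnet failure hits. Those two qemu-img steps, copied from the log with the machine directory reduced to a placeholder, can be reproduced by hand:

	MACHINE="$HOME/.minikube/machines/test-preload-221000"   # placeholder path
	qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"
	qemu-img resize "$MACHINE/disk.qcow2" +20000M            # same flags as the log
	qemu-img info "$MACHINE/disk.qcow2"                      # confirm the enlarged virtual size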

TestRunningBinaryUpgrade (602.47s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2857710224 start -p running-upgrade-835000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2857710224 start -p running-upgrade-835000 --memory=2200 --vm-driver=qemu2 : (49.349654709s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-835000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-835000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m39.837629375s)

-- stdout --
	* [running-upgrade-835000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-835000" primary control-plane node in "running-upgrade-835000" cluster
	* Updating the running qemu2 "running-upgrade-835000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0408 04:39:34.135296    9654 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:39:34.135424    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:39:34.135428    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:39:34.135430    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:39:34.135560    9654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:39:34.136493    9654 out.go:298] Setting JSON to false
	I0408 04:39:34.154356    9654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5943,"bootTime":1712570431,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:39:34.154419    9654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:39:34.159580    9654 out.go:177] * [running-upgrade-835000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:39:34.167533    9654 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:39:34.172556    9654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:39:34.167613    9654 notify.go:220] Checking for updates...
	I0408 04:39:34.174077    9654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:39:34.177534    9654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:39:34.180525    9654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:39:34.183568    9654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:39:34.186825    9654 config.go:182] Loaded profile config "running-upgrade-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:39:34.190526    9654 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 04:39:34.193516    9654 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:39:34.198464    9654 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:39:34.205491    9654 start.go:297] selected driver: qemu2
	I0408 04:39:34.205497    9654 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51241 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:39:34.205557    9654 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:39:34.208401    9654 cni.go:84] Creating CNI manager for ""
	I0408 04:39:34.208418    9654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:39:34.208444    9654 start.go:340] cluster config:
	{Name:running-upgrade-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51241 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:39:34.208495    9654 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:39:34.216568    9654 out.go:177] * Starting "running-upgrade-835000" primary control-plane node in "running-upgrade-835000" cluster
	I0408 04:39:34.220394    9654 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 04:39:34.220411    9654 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0408 04:39:34.220418    9654 cache.go:56] Caching tarball of preloaded images
	I0408 04:39:34.220468    9654 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:39:34.220474    9654 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0408 04:39:34.220525    9654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/config.json ...
	I0408 04:39:34.221116    9654 start.go:360] acquireMachinesLock for running-upgrade-835000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:39:34.221151    9654 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "running-upgrade-835000"
	I0408 04:39:34.221158    9654 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:39:34.221163    9654 fix.go:54] fixHost starting: 
	I0408 04:39:34.221856    9654 fix.go:112] recreateIfNeeded on running-upgrade-835000: state=Running err=<nil>
	W0408 04:39:34.221864    9654 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:39:34.231532    9654 out.go:177] * Updating the running qemu2 "running-upgrade-835000" VM ...
	I0408 04:39:34.235560    9654 machine.go:94] provisionDockerMachine start ...
	I0408 04:39:34.235629    9654 main.go:141] libmachine: Using SSH client type: native
	I0408 04:39:34.235769    9654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101089c80] 0x10108c4e0 <nil>  [] 0s} localhost 51209 <nil> <nil>}
	I0408 04:39:34.235774    9654 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 04:39:34.287377    9654 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-835000
	
	I0408 04:39:34.287391    9654 buildroot.go:166] provisioning hostname "running-upgrade-835000"
	I0408 04:39:34.287432    9654 main.go:141] libmachine: Using SSH client type: native
	I0408 04:39:34.287525    9654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101089c80] 0x10108c4e0 <nil>  [] 0s} localhost 51209 <nil> <nil>}
	I0408 04:39:34.287532    9654 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-835000 && echo "running-upgrade-835000" | sudo tee /etc/hostname
	I0408 04:39:34.340193    9654 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-835000
	
	I0408 04:39:34.340238    9654 main.go:141] libmachine: Using SSH client type: native
	I0408 04:39:34.340327    9654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101089c80] 0x10108c4e0 <nil>  [] 0s} localhost 51209 <nil> <nil>}
	I0408 04:39:34.340335    9654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-835000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-835000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-835000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 04:39:34.391789    9654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 04:39:34.391799    9654 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18588-7343/.minikube CaCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18588-7343/.minikube}
	I0408 04:39:34.391805    9654 buildroot.go:174] setting up certificates
	I0408 04:39:34.391810    9654 provision.go:84] configureAuth start
	I0408 04:39:34.391813    9654 provision.go:143] copyHostCerts
	I0408 04:39:34.391888    9654 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem, removing ...
	I0408 04:39:34.391893    9654 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem
	I0408 04:39:34.392020    9654 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem (1078 bytes)
	I0408 04:39:34.392222    9654 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem, removing ...
	I0408 04:39:34.392225    9654 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem
	I0408 04:39:34.392271    9654 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem (1123 bytes)
	I0408 04:39:34.392401    9654 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem, removing ...
	I0408 04:39:34.392404    9654 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem
	I0408 04:39:34.392449    9654 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem (1679 bytes)
	I0408 04:39:34.392533    9654 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-835000 san=[127.0.0.1 localhost minikube running-upgrade-835000]
	I0408 04:39:34.636478    9654 provision.go:177] copyRemoteCerts
	I0408 04:39:34.636527    9654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 04:39:34.636536    9654 sshutil.go:53] new ssh client: &{IP:localhost Port:51209 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/running-upgrade-835000/id_rsa Username:docker}
	I0408 04:39:34.665405    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 04:39:34.672204    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 04:39:34.678753    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 04:39:34.686557    9654 provision.go:87] duration metric: took 294.737ms to configureAuth
	I0408 04:39:34.686569    9654 buildroot.go:189] setting minikube options for container-runtime
	I0408 04:39:34.686705    9654 config.go:182] Loaded profile config "running-upgrade-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:39:34.686744    9654 main.go:141] libmachine: Using SSH client type: native
	I0408 04:39:34.686832    9654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101089c80] 0x10108c4e0 <nil>  [] 0s} localhost 51209 <nil> <nil>}
	I0408 04:39:34.686836    9654 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 04:39:34.741087    9654 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 04:39:34.741098    9654 buildroot.go:70] root file system type: tmpfs
	I0408 04:39:34.741146    9654 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 04:39:34.741203    9654 main.go:141] libmachine: Using SSH client type: native
	I0408 04:39:34.741309    9654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101089c80] 0x10108c4e0 <nil>  [] 0s} localhost 51209 <nil> <nil>}
	I0408 04:39:34.741341    9654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 04:39:34.792230    9654 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 04:39:34.792286    9654 main.go:141] libmachine: Using SSH client type: native
	I0408 04:39:34.792390    9654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101089c80] 0x10108c4e0 <nil>  [] 0s} localhost 51209 <nil> <nil>}
	I0408 04:39:34.792397    9654 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 04:39:34.845351    9654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
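The empty diff output above is why nothing else happens here: the rendered unit is written to docker.service.new, and the "diff -u old new || { mv && daemon-reload && enable && restart }" one-liner only swaps it in and bounces Docker when the two files differ. A sketch of that idiom under the same assumption (the install callback stands in for the mv/daemon-reload/restart chain):

package main

import (
	"bytes"
	"fmt"
)

// swapIfChanged mirrors the shell idiom above: a matching rendered unit
// (diff exit status 0) means no restart; any difference triggers the
// install chain.
func swapIfChanged(installed, rendered []byte, install func() error) error {
	if bytes.Equal(installed, rendered) {
		return nil // identical: skip the needless docker restart
	}
	return install()
}

func main() {
	_ = swapIfChanged([]byte("unit"), []byte("unit"), func() error {
		fmt.Println("restarting docker")
		return nil
	})
}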
	I0408 04:39:34.845362    9654 machine.go:97] duration metric: took 609.803459ms to provisionDockerMachine
	I0408 04:39:34.845368    9654 start.go:293] postStartSetup for "running-upgrade-835000" (driver="qemu2")
	I0408 04:39:34.845374    9654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 04:39:34.845423    9654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 04:39:34.845431    9654 sshutil.go:53] new ssh client: &{IP:localhost Port:51209 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/running-upgrade-835000/id_rsa Username:docker}
	I0408 04:39:34.872941    9654 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 04:39:34.874146    9654 info.go:137] Remote host: Buildroot 2021.02.12
	I0408 04:39:34.874152    9654 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18588-7343/.minikube/addons for local assets ...
	I0408 04:39:34.874220    9654 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18588-7343/.minikube/files for local assets ...
	I0408 04:39:34.874329    9654 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem -> 77492.pem in /etc/ssl/certs
	I0408 04:39:34.874453    9654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 04:39:34.877448    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem --> /etc/ssl/certs/77492.pem (1708 bytes)
	I0408 04:39:34.883987    9654 start.go:296] duration metric: took 38.615208ms for postStartSetup
	I0408 04:39:34.884000    9654 fix.go:56] duration metric: took 662.847625ms for fixHost
	I0408 04:39:34.884029    9654 main.go:141] libmachine: Using SSH client type: native
	I0408 04:39:34.884143    9654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101089c80] 0x10108c4e0 <nil>  [] 0s} localhost 51209 <nil> <nil>}
	I0408 04:39:34.884150    9654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 04:39:34.935991    9654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576375.018102221
	
	I0408 04:39:34.935999    9654 fix.go:216] guest clock: 1712576375.018102221
	I0408 04:39:34.936003    9654 fix.go:229] Guest: 2024-04-08 04:39:35.018102221 -0700 PDT Remote: 2024-04-08 04:39:34.884002 -0700 PDT m=+0.771919584 (delta=134.100221ms)
	I0408 04:39:34.936015    9654 fix.go:200] guest clock delta is within tolerance: 134.100221ms
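The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and skip a resync because the 134ms drift is "within tolerance". A minimal sketch of that comparison; the 2-second threshold is an assumption, since the log only shows that 134ms passed the check:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns the
// drift relative to the host clock. Float parsing loses sub-microsecond
// precision, which is fine for a tolerance check.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	d, _ := clockDelta("1712576375.018102221", time.Unix(1712576374, 884002000))
	fmt.Println(d, d.Abs() < 2*time.Second) // assumed tolerance, not from the log
}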
	I0408 04:39:34.936017    9654 start.go:83] releasing machines lock for "running-upgrade-835000", held for 714.872625ms
	I0408 04:39:34.936076    9654 ssh_runner.go:195] Run: cat /version.json
	I0408 04:39:34.936084    9654 sshutil.go:53] new ssh client: &{IP:localhost Port:51209 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/running-upgrade-835000/id_rsa Username:docker}
	I0408 04:39:34.936077    9654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 04:39:34.936108    9654 sshutil.go:53] new ssh client: &{IP:localhost Port:51209 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/running-upgrade-835000/id_rsa Username:docker}
	W0408 04:39:34.936739    9654 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51319->127.0.0.1:51209: write: broken pipe
	I0408 04:39:34.936759    9654 retry.go:31] will retry after 300.957952ms: ssh: handshake failed: write tcp 127.0.0.1:51319->127.0.0.1:51209: write: broken pipe
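The retry.go line above shows minikube's response to the transient SSH broken pipe: wait a short, slightly randomized interval and dial again. A sketch of that pattern; the jitter bounds and attempt count are assumptions, as the log records only the chosen ~301ms delay:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// withRetry retries a transient failure with a jittered delay, in the
// spirit of the retry.go line above.
func withRetry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// jittered sleep between ~200ms and ~400ms (assumed bounds)
		delay := 200*time.Millisecond +
			time.Duration(rand.Int63n(int64(200*time.Millisecond)))
		time.Sleep(delay)
	}
	return err
}

func main() {
	fmt.Println(withRetry(3, func() error { return errors.New("broken pipe") }))
}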
	W0408 04:39:34.963028    9654 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0408 04:39:34.963077    9654 ssh_runner.go:195] Run: systemctl --version
	I0408 04:39:34.964742    9654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 04:39:34.966237    9654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 04:39:34.966264    9654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0408 04:39:34.969263    9654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0408 04:39:34.973350    9654 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 04:39:34.973356    9654 start.go:494] detecting cgroup driver to use...
	I0408 04:39:34.973464    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 04:39:34.978731    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0408 04:39:34.982033    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 04:39:34.985688    9654 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 04:39:34.985728    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 04:39:34.988729    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 04:39:34.991401    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 04:39:34.994533    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 04:39:34.997893    9654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 04:39:35.000796    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 04:39:35.003683    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 04:39:35.006561    9654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 04:39:35.010123    9654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 04:39:35.013110    9654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 04:39:35.015589    9654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:39:35.106740    9654 ssh_runner.go:195] Run: sudo systemctl restart containerd
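The run of sed commands above patches /etc/containerd/config.toml in place to force the cgroupfs driver (SystemdCgroup = false), switch runc to the v2 shim, and reset the CNI conf_dir, before reloading and restarting containerd. A sketch of the SystemdCgroup edit on an in-memory copy, using the same pattern as the sed expression in the log (illustrative only; minikube performs it over SSH with sed):

package main

import (
	"fmt"
	"regexp"
)

// cgroupRe matches the sed pattern 's|^( *)SystemdCgroup = .*$|...|g',
// preserving the original indentation via the capture group.
var cgroupRe = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

func setCgroupfs(toml string) string {
	return cgroupRe.ReplaceAllString(toml, "${1}SystemdCgroup = false")
}

func main() {
	fmt.Println(setCgroupfs("    SystemdCgroup = true"))
}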
	I0408 04:39:35.113893    9654 start.go:494] detecting cgroup driver to use...
	I0408 04:39:35.113958    9654 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 04:39:35.122628    9654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 04:39:35.127442    9654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 04:39:35.133269    9654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 04:39:35.138201    9654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 04:39:35.143147    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 04:39:35.148623    9654 ssh_runner.go:195] Run: which cri-dockerd
	I0408 04:39:35.149852    9654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 04:39:35.152663    9654 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0408 04:39:35.157587    9654 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 04:39:35.258618    9654 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 04:39:35.357166    9654 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 04:39:35.357223    9654 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 04:39:35.363376    9654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:39:35.445492    9654 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 04:39:48.108457    9654 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.663127167s)
	I0408 04:39:48.108528    9654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 04:39:48.112877    9654 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0408 04:39:48.120271    9654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 04:39:48.125413    9654 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 04:39:48.203706    9654 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 04:39:48.288912    9654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:39:48.369131    9654 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 04:39:48.375417    9654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 04:39:48.379963    9654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:39:48.462188    9654 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 04:39:48.499809    9654 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 04:39:48.499890    9654 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 04:39:48.501861    9654 start.go:562] Will wait 60s for crictl version
	I0408 04:39:48.501906    9654 ssh_runner.go:195] Run: which crictl
	I0408 04:39:48.503644    9654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 04:39:48.515847    9654 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0408 04:39:48.515915    9654 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 04:39:48.528104    9654 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 04:39:48.546356    9654 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0408 04:39:48.546432    9654 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0408 04:39:48.547872    9654 kubeadm.go:877] updating cluster {Name:running-upgrade-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51241 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0408 04:39:48.547911    9654 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 04:39:48.547955    9654 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 04:39:48.558359    9654 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 04:39:48.558376    9654 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 04:39:48.558419    9654 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 04:39:48.561467    9654 ssh_runner.go:195] Run: which lz4
	I0408 04:39:48.562809    9654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 04:39:48.564041    9654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 04:39:48.564050    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0408 04:39:49.300914    9654 docker.go:649] duration metric: took 738.147ms to copy over tarball
	I0408 04:39:49.300979    9654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 04:39:50.727571    9654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.426598667s)
	I0408 04:39:50.727585    9654 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 04:39:50.742999    9654 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 04:39:50.746094    9654 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0408 04:39:50.751036    9654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:39:50.834273    9654 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 04:39:52.052018    9654 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.21774725s)
	I0408 04:39:52.052108    9654 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 04:39:52.064692    9654 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 04:39:52.064702    9654 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 04:39:52.064707    9654 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
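The fallback to LoadCachedImages above is triggered by a pure name mismatch: the restored preload (listed a few lines up) carries k8s.gcr.io/... tags, while this Kubernetes version expects registry.k8s.io/... names, so the existence check fails even though equivalent layers are present, and each image is transferred from the host cache instead. A minimal sketch of that check (the function name is illustrative):

package main

import "fmt"

// needsTransfer mirrors the docker.go check above: an image "wasn't
// preloaded" when its exact name:tag is absent from the `docker images`
// output, even if the same layers exist under another registry prefix.
func needsTransfer(preloaded []string, want string) bool {
	for _, img := range preloaded {
		if img == want {
			return false
		}
	}
	return true
}

func main() {
	have := []string{"k8s.gcr.io/kube-apiserver:v1.24.1"}
	fmt.Println(needsTransfer(have, "registry.k8s.io/kube-apiserver:v1.24.1")) // true
}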
	I0408 04:39:52.074170    9654 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:39:52.074435    9654 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:39:52.074558    9654 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:39:52.074685    9654 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:39:52.074786    9654 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:39:52.074944    9654 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0408 04:39:52.075342    9654 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:39:52.075684    9654 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:39:52.083346    9654 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:39:52.083370    9654 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:39:52.083529    9654 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 04:39:52.083682    9654 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:39:52.083881    9654 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:39:52.084355    9654 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:39:52.084522    9654 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:39:52.085118    9654 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:39:52.446672    9654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0408 04:39:52.453339    9654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0408 04:39:52.459339    9654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:39:52.460121    9654 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0408 04:39:52.460143    9654 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0408 04:39:52.460168    9654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0408 04:39:52.467087    9654 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0408 04:39:52.467110    9654 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:39:52.467169    9654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0408 04:39:52.473406    9654 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0408 04:39:52.473425    9654 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:39:52.473483    9654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:39:52.477318    9654 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0408 04:39:52.477430    9654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0408 04:39:52.485753    9654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:39:52.487410    9654 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0408 04:39:52.487510    9654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0408 04:39:52.494251    9654 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0408 04:39:52.494389    9654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:39:52.500550    9654 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0408 04:39:52.511830    9654 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0408 04:39:52.511849    9654 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:39:52.511896    9654 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0408 04:39:52.511906    9654 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:39:52.511910    9654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:39:52.511932    9654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:39:52.511937    9654 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0408 04:39:52.511948    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0408 04:39:52.511955    9654 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0408 04:39:52.511962    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0408 04:39:52.514177    9654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:39:52.532811    9654 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0408 04:39:52.532824    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0408 04:39:52.545104    9654 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 04:39:52.545123    9654 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0408 04:39:52.545218    9654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0408 04:39:52.564790    9654 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0408 04:39:52.564819    9654 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:39:52.564881    9654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:39:52.584047    9654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:39:52.610784    9654 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0408 04:39:52.610815    9654 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0408 04:39:52.610837    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0408 04:39:52.611014    9654 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0408 04:39:52.628321    9654 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0408 04:39:52.628347    9654 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:39:52.628398    9654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:39:52.676687    9654 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0408 04:39:52.702999    9654 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0408 04:39:52.703013    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0408 04:39:52.818011    9654 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0408 04:39:52.818761    9654 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0408 04:39:52.818769    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0408 04:39:52.931226    9654 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0408 04:39:52.931326    9654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:39:52.953116    9654 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0408 04:39:52.953155    9654 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0408 04:39:52.953174    9654 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:39:52.953231    9654 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:39:53.602157    9654 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 04:39:53.602592    9654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0408 04:39:53.608012    9654 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0408 04:39:53.608077    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0408 04:39:53.658205    9654 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 04:39:53.658221    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0408 04:39:53.894457    9654 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 04:39:53.894498    9654 cache_images.go:92] duration metric: took 1.829811625s to LoadCachedImages
	W0408 04:39:53.894534    9654 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
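Each successful transfer in the block above finishes with the same pipe, `sudo cat <tarball> | docker load`, presumably so the root-owned tarball is read under sudo while `docker load` itself runs as the unprivileged docker user. A sketch of that invocation (the path is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage mirrors the docker.go:304 lines above: stream the image
// tarball into `docker load` through a shell pipe.
func loadImage(path string) error {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", path))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}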
	I0408 04:39:53.894539    9654 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0408 04:39:53.894600    9654 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-835000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 04:39:53.894660    9654 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0408 04:39:53.908460    9654 cni.go:84] Creating CNI manager for ""
	I0408 04:39:53.908472    9654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:39:53.908483    9654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 04:39:53.908491    9654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-835000 NodeName:running-upgrade-835000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 04:39:53.908552    9654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-835000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 04:39:53.908606    9654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0408 04:39:53.911925    9654 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 04:39:53.911952    9654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 04:39:53.915148    9654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0408 04:39:53.920346    9654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 04:39:53.925379    9654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0408 04:39:53.930570    9654 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0408 04:39:53.932113    9654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:39:54.013606    9654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 04:39:54.018471    9654 certs.go:68] Setting up /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000 for IP: 10.0.2.15
	I0408 04:39:54.018478    9654 certs.go:194] generating shared ca certs ...
	I0408 04:39:54.018485    9654 certs.go:226] acquiring lock for ca certs: {Name:mkf571f644c202bb973f8b5774e57a066bda7c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:39:54.018743    9654 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.key
	I0408 04:39:54.018794    9654 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.key
	I0408 04:39:54.018802    9654 certs.go:256] generating profile certs ...
	I0408 04:39:54.018856    9654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/client.key
	I0408 04:39:54.018867    9654 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.key.fd76ea04
	I0408 04:39:54.018879    9654 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.crt.fd76ea04 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0408 04:39:54.134965    9654 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.crt.fd76ea04 ...
	I0408 04:39:54.134971    9654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.crt.fd76ea04: {Name:mkdc73db369c1d1c39c8e9a3e35366402645c77a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:39:54.135198    9654 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.key.fd76ea04 ...
	I0408 04:39:54.141407    9654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.key.fd76ea04: {Name:mkfb3c5abb35763b8bb80a5c1d43fda42ff1ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:39:54.141627    9654 certs.go:381] copying /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.crt.fd76ea04 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.crt
	I0408 04:39:54.141879    9654 certs.go:385] copying /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.key.fd76ea04 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.key
	I0408 04:39:54.142038    9654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/proxy-client.key
	I0408 04:39:54.142150    9654 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749.pem (1338 bytes)
	W0408 04:39:54.142180    9654 certs.go:480] ignoring /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749_empty.pem, impossibly tiny 0 bytes
	I0408 04:39:54.142185    9654 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 04:39:54.142205    9654 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem (1078 bytes)
	I0408 04:39:54.142223    9654 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem (1123 bytes)
	I0408 04:39:54.142242    9654 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem (1679 bytes)
	I0408 04:39:54.142282    9654 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem (1708 bytes)
	I0408 04:39:54.142601    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 04:39:54.150199    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0408 04:39:54.157310    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 04:39:54.164336    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 04:39:54.171755    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 04:39:54.180255    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 04:39:54.202234    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 04:39:54.224238    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 04:39:54.245340    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 04:39:54.255385    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749.pem --> /usr/share/ca-certificates/7749.pem (1338 bytes)
	I0408 04:39:54.262338    9654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem --> /usr/share/ca-certificates/77492.pem (1708 bytes)
	I0408 04:39:54.280811    9654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 04:39:54.286636    9654 ssh_runner.go:195] Run: openssl version
	I0408 04:39:54.290168    9654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7749.pem && ln -fs /usr/share/ca-certificates/7749.pem /etc/ssl/certs/7749.pem"
	I0408 04:39:54.293959    9654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7749.pem
	I0408 04:39:54.295618    9654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:27 /usr/share/ca-certificates/7749.pem
	I0408 04:39:54.295648    9654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7749.pem
	I0408 04:39:54.297671    9654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7749.pem /etc/ssl/certs/51391683.0"
	I0408 04:39:54.305146    9654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77492.pem && ln -fs /usr/share/ca-certificates/77492.pem /etc/ssl/certs/77492.pem"
	I0408 04:39:54.308956    9654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77492.pem
	I0408 04:39:54.310525    9654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:27 /usr/share/ca-certificates/77492.pem
	I0408 04:39:54.310546    9654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77492.pem
	I0408 04:39:54.312687    9654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77492.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 04:39:54.317531    9654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 04:39:54.327657    9654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:39:54.330817    9654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:39:54.330846    9654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:39:54.335449    9654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
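The repeating three-step pattern above (copy the PEM into /usr/share/ca-certificates, run `openssl x509 -hash -noout`, then symlink it into /etc/ssl/certs as <hash>.0) exists because OpenSSL locates trust anchors in a CApath by that hashed filename; b5213941.0 is the subject hash of minikubeCA.pem. A sketch of the same dance, shelling out for the hash exactly as the log does (paths are illustrative):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkTrustAnchor computes the OpenSSL subject hash of a PEM certificate
// and symlinks it into /etc/ssl/certs as <hash>.0 so OpenSSL's
// lookup-by-hash can find it.
func linkTrustAnchor(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return os.Symlink(pemPath, filepath.Join("/etc/ssl/certs", hash+".0"))
}

func main() {
	_ = linkTrustAnchor("/usr/share/ca-certificates/minikubeCA.pem")
}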
	I0408 04:39:54.338876    9654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 04:39:54.347438    9654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 04:39:54.360262    9654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 04:39:54.366622    9654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 04:39:54.381063    9654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 04:39:54.389953    9654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 04:39:54.395432    9654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
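The run of `-checkend 86400` calls above asks OpenSSL whether each control-plane certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The same test expressed with Go's x509 package:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"time"
)

// expiresWithin does what `openssl x509 -checkend <secs>` does above:
// report whether the certificate's NotAfter falls inside the window.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {} // call expiresWithin with cert bytes read from disk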
	I0408 04:39:54.400024    9654 kubeadm.go:391] StartCluster: {Name:running-upgrade-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51241 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:39:54.400093    9654 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 04:39:54.423255    9654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 04:39:54.427470    9654 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 04:39:54.427478    9654 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 04:39:54.427481    9654 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 04:39:54.427522    9654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 04:39:54.431401    9654 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 04:39:54.431445    9654 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-835000" does not appear in /Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:39:54.431463    9654 kubeconfig.go:62] /Users/jenkins/minikube-integration/18588-7343/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-835000" cluster setting kubeconfig missing "running-upgrade-835000" context setting]
	I0408 04:39:54.431639    9654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/kubeconfig: {Name:mk04d6060f19666b377da34a3aa7f8b9bcbb5054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:39:54.432242    9654 kapi.go:59] client config for running-upgrade-835000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/client.key", CAFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10237f940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 04:39:54.433045    9654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 04:39:54.436796    9654 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-835000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
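The diff above is the drift that forces a reconfigure: the regenerated config moves criSocket to the unix:// URI scheme, switches the kubelet cgroupDriver from systemd to cgroupfs, and adds hairpinMode and runtimeRequestTimeout. The detection itself is just the diff exit status, as in this bash sketch using the same paths the log runs against:

    # non-zero exit from diff is the "config drift" signal seen at kubeadm.go:634
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
        && echo "no drift" || echo "drift detected - cluster will be reconfigured"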
	I0408 04:39:54.436802    9654 kubeadm.go:1154] stopping kube-system containers ...
	I0408 04:39:54.436857    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 04:39:54.457932    9654 docker.go:483] Stopping containers: [5c4ef2616dfd 112bdd60d282 2417bfdb35d8 72e63a2815d0 f02e6d9b9a9f e01027ce17e2 0a4c7ba6bd44 810f08844708 518baeb1a1d2 8854a7b5f3ef bf6e4c5c0a1c c163f376de2c 965d7d5e1c27 1040f0881436 27641d8f6dd6 9f694ad74a37]
	I0408 04:39:54.458017    9654 ssh_runner.go:195] Run: docker stop 5c4ef2616dfd 112bdd60d282 2417bfdb35d8 72e63a2815d0 f02e6d9b9a9f e01027ce17e2 0a4c7ba6bd44 810f08844708 518baeb1a1d2 8854a7b5f3ef bf6e4c5c0a1c c163f376de2c 965d7d5e1c27 1040f0881436 27641d8f6dd6 9f694ad74a37
	I0408 04:39:54.548413    9654 ssh_runner.go:195] Run: sudo systemctl stop kubelet
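The IDs handed to docker stop are exactly those returned by the name filter two lines earlier. A condensed sketch chaining the two steps plus the kubelet stop (the xargs pipeline is an assumption; minikube issues the commands separately, and the filter needs quoting in a real shell because of the parentheses):

    # stop every kube-system pod container, then the kubelet that would restart them
    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop
    sudo systemctl stop kubelet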
	I0408 04:39:54.642445    9654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 04:39:54.646623    9654 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Apr  8 11:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Apr  8 11:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Apr  8 11:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Apr  8 11:39 /etc/kubernetes/scheduler.conf
	
	I0408 04:39:54.646661    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/admin.conf
	I0408 04:39:54.650124    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 04:39:54.650149    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 04:39:54.653476    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/kubelet.conf
	I0408 04:39:54.656705    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 04:39:54.656729    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 04:39:54.660064    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/controller-manager.conf
	I0408 04:39:54.663024    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 04:39:54.663047    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 04:39:54.665836    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/scheduler.conf
	I0408 04:39:54.668598    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 04:39:54.668619    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
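Each grep above tests whether an existing kubeconfig already points at the expected control-plane endpoint; exit status 1 marks the file as stale, so it is removed and left for kubeadm to regenerate. A compact bash equivalent of the four grep/rm cycles (the loop is illustrative, not what minikube literally runs):

    ENDPOINT=https://control-plane.minikube.internal:51241
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$ENDPOINT" /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done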
	I0408 04:39:54.671376    9654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 04:39:54.674060    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:39:54.693937    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:39:55.178901    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:39:55.374684    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:39:55.397782    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
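For a restart, minikube re-runs individual kubeadm init phases rather than a full kubeadm init, in the order certs, kubeconfig, kubelet-start, control-plane, etcd. The five invocations above collapse to a sketch like this (functionally the same commands, looped for brevity):

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done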
	I0408 04:39:55.422231    9654 api_server.go:52] waiting for apiserver process to appear ...
	I0408 04:39:55.422306    9654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:39:55.924728    9654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:39:56.424591    9654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:39:56.924391    9654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:39:56.928877    9654 api_server.go:72] duration metric: took 1.506667375s to wait for apiserver process to appear ...
	I0408 04:39:56.928889    9654 api_server.go:88] waiting for apiserver healthz status ...
	I0408 04:39:56.928899    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:01.931036    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:01.931109    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:06.931681    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:06.931758    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:11.932663    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:11.932736    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:16.933795    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:16.933823    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:21.934926    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:21.935025    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:26.937279    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:26.937366    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:31.939655    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:31.939702    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:36.941017    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:36.941101    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:41.943597    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:41.943650    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:46.944970    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:46.945036    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:51.946027    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:40:51.946116    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:40:56.948655    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
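From this point the apiserver never answers: each healthz probe above burns its roughly 5s client timeout and fails, and minikube interleaves log gathering between attempts. A curl stand-in for one probe (curl replaces the Go HTTP client here; -k skips TLS verification since only reachability matters):

    # one probe, roughly matching an api_server.go attempt: 5s budget, insecure TLS
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz \
        && echo "apiserver healthy" || echo "apiserver not responding"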
	I0408 04:40:56.949152    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:40:56.986005    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:40:56.986170    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:40:57.005876    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:40:57.005999    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:40:57.022731    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:40:57.022817    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:40:57.039750    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:40:57.039818    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:40:57.049912    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:40:57.049994    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:40:57.061144    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:40:57.061223    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:40:57.071227    9654 logs.go:276] 0 containers: []
	W0408 04:40:57.071238    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:40:57.071305    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:40:57.081476    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:40:57.081493    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:40:57.081498    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
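The container-status command prefers crictl and falls back to docker: the embedded "which crictl || echo crictl" substitution keeps the command line valid even when crictl is missing, and the trailing "|| sudo docker ps -a" catches the resulting failure. Spelled out as plain bash (an illustrative restructuring, not the literal command):

    # prefer crictl when present, otherwise fall back to docker
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi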
	I0408 04:40:57.093110    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:40:57.093123    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:40:57.108230    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:40:57.108243    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:40:57.119632    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:40:57.119645    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:40:57.130925    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:40:57.130934    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:40:57.158462    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:40:57.158469    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:40:57.172156    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:40:57.172167    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:40:57.183715    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:40:57.183724    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:40:57.194855    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:40:57.194869    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:40:57.206989    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:40:57.207000    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:40:57.243814    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:40:57.243910    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
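The recurring kubelet problem is a node-authorizer symptom: a kubelet may only read a ConfigMap once a pod referencing it is bound to its node, so "no relationship found between node ... and this object" suggests the coredns pod was not (yet) scheduled onto running-upgrade-835000 when the kubelet tried to sync it. One way to inspect that by hand, reusing the kubectl binary and kubeconfig paths from the log:

    # list kube-system pods bound to the node the kubelet complains about
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get pods -n kube-system -o wide --field-selector spec.nodeName=running-upgrade-835000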
	I0408 04:40:57.244413    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:40:57.244418    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:40:57.249236    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:40:57.249244    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:40:57.260727    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:40:57.260741    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:40:57.277983    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:40:57.277996    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:40:57.350129    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:40:57.350143    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:40:57.363288    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:40:57.363301    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:40:57.386747    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:40:57.386760    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:40:57.397434    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:40:57.397447    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:40:57.397474    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:40:57.397486    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:40:57.397491    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:40:57.397495    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:40:57.397497    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:41:07.401188    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:41:12.403469    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:41:12.403946    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:41:12.446104    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:41:12.446240    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:41:12.469131    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:41:12.469225    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:41:12.483523    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:41:12.483601    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:41:12.500688    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:41:12.500753    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:41:12.510997    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:41:12.511070    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:41:12.521877    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:41:12.521953    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:41:12.532586    9654 logs.go:276] 0 containers: []
	W0408 04:41:12.532601    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:41:12.532678    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:41:12.543396    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:41:12.543419    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:41:12.543423    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:41:12.557887    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:41:12.557899    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:41:12.575433    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:41:12.575442    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:41:12.579796    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:41:12.579803    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:41:12.595333    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:41:12.595345    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:41:12.606971    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:41:12.606984    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:41:12.618351    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:41:12.618360    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:41:12.629496    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:41:12.629526    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:41:12.667120    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:41:12.667130    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:41:12.686980    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:41:12.686990    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:41:12.701098    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:41:12.701109    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:41:12.712395    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:41:12.712408    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:41:12.724503    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:41:12.724515    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:41:12.736872    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:41:12.736884    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:41:12.774292    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:41:12.774383    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:41:12.774899    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:41:12.774904    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:41:12.785878    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:41:12.785888    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:41:12.810367    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:41:12.810375    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:41:12.824514    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:12.824525    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:41:12.824550    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:41:12.824554    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:41:12.824567    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:41:12.824575    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:12.824583    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:41:22.828737    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:41:27.831447    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:41:27.831627    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:41:27.854433    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:41:27.854545    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:41:27.870090    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:41:27.870173    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:41:27.882786    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:41:27.882855    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:41:27.893685    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:41:27.893758    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:41:27.903959    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:41:27.904033    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:41:27.914314    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:41:27.914399    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:41:27.924214    9654 logs.go:276] 0 containers: []
	W0408 04:41:27.924225    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:41:27.924278    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:41:27.934624    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:41:27.934641    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:41:27.934646    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:41:27.948539    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:41:27.948550    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:41:27.966198    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:41:27.966210    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:41:27.980062    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:41:27.980072    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:41:28.015089    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:41:28.015101    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:41:28.026606    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:41:28.026617    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:41:28.041728    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:41:28.041739    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:41:28.054003    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:41:28.054015    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:41:28.065760    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:41:28.065772    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:41:28.084523    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:41:28.084535    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:41:28.120884    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:41:28.120977    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:41:28.121513    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:41:28.121517    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:41:28.132694    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:41:28.132704    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:41:28.144581    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:41:28.144593    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:41:28.157030    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:41:28.157042    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:41:28.168060    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:41:28.168070    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:41:28.179362    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:41:28.179377    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:41:28.206496    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:41:28.206505    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:41:28.211115    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:28.211124    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:41:28.211166    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:41:28.211172    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:41:28.211175    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:41:28.211179    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:28.211182    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:41:38.213700    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:41:43.214897    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:41:43.215223    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:41:43.242509    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:41:43.242631    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:41:43.260271    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:41:43.260363    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:41:43.273636    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:41:43.273704    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:41:43.287871    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:41:43.287947    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:41:43.298509    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:41:43.298577    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:41:43.308743    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:41:43.308808    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:41:43.318389    9654 logs.go:276] 0 containers: []
	W0408 04:41:43.318398    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:41:43.318447    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:41:43.340308    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:41:43.340326    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:41:43.340332    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:41:43.351950    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:41:43.351959    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:41:43.363623    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:41:43.363635    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:41:43.367944    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:41:43.367953    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:41:43.381182    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:41:43.381192    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:41:43.402303    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:41:43.402315    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:41:43.413589    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:41:43.413599    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:41:43.438064    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:41:43.438071    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:41:43.472658    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:41:43.472671    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:41:43.484743    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:41:43.484756    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:41:43.496047    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:41:43.496059    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:41:43.508243    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:41:43.508256    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:41:43.544327    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:41:43.544420    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:41:43.544933    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:41:43.544938    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:41:43.559280    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:41:43.559291    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:41:43.571158    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:41:43.571172    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:41:43.588335    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:41:43.588347    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:41:43.600107    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:41:43.600119    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:41:43.613641    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:43.613654    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:41:43.613678    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:41:43.613681    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:41:43.613685    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:41:43.613728    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:43.613731    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:41:53.615237    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:41:58.618066    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:41:58.618527    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:41:58.656064    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:41:58.656242    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:41:58.677782    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:41:58.677879    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:41:58.693045    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:41:58.693109    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:41:58.705260    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:41:58.705331    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:41:58.715690    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:41:58.715756    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:41:58.726164    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:41:58.726224    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:41:58.736623    9654 logs.go:276] 0 containers: []
	W0408 04:41:58.736643    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:41:58.736700    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:41:58.747288    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:41:58.747305    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:41:58.747310    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:41:58.758637    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:41:58.758649    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:41:58.769958    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:41:58.769969    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:41:58.781291    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:41:58.781303    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:41:58.793183    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:41:58.793196    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:41:58.804162    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:41:58.804175    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:41:58.818065    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:41:58.818077    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:41:58.856868    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:41:58.856881    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:41:58.871069    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:41:58.871081    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:41:58.884716    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:41:58.884725    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:41:58.899030    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:41:58.899043    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:41:58.910243    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:41:58.910256    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:41:58.936657    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:41:58.936668    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:41:58.940765    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:41:58.940775    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:41:58.976314    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:41:58.976406    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:41:58.976909    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:41:58.976913    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:41:58.994413    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:41:58.994422    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:41:59.006163    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:41:59.006173    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:41:59.017411    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:59.017422    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:41:59.017449    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:41:59.017483    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:41:59.017489    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:41:59.017493    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:59.017497    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:42:09.021658    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:42:14.024200    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:42:14.024334    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:42:14.035550    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:42:14.035629    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:42:14.046959    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:42:14.047039    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:42:14.058494    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:42:14.058568    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:42:14.069263    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:42:14.069337    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:42:14.079749    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:42:14.079825    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:42:14.099635    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:42:14.099708    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:42:14.110478    9654 logs.go:276] 0 containers: []
	W0408 04:42:14.110492    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:42:14.110550    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:42:14.121146    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
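When the health check fails, each diagnostic cycle first enumerates current and exited control-plane containers by the k8s_<component> name prefix that the Docker runtime applies. The docker invocation is verbatim from the lines above; only the loop around it is a sketch:

    # List container IDs for each control-plane component, including
    # exited ones, using the same name filter as the log above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}')
      echo "$c: ${ids:-none}"
    done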
	I0408 04:42:14.121164    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:42:14.121169    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:42:14.132899    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:42:14.132909    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:42:14.156770    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:42:14.156777    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:42:14.160905    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:42:14.160913    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:42:14.178359    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:42:14.178375    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:42:14.193401    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:42:14.193411    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:42:14.205112    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:42:14.205126    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:42:14.216380    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:42:14.216394    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:42:14.227809    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:42:14.227820    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:42:14.263783    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:42:14.263795    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:42:14.275851    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:42:14.275862    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:42:14.289388    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:42:14.289402    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:42:14.327436    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:42:14.327531    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:42:14.328049    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:42:14.328053    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:42:14.345096    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:42:14.345110    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:42:14.359712    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:42:14.359722    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:42:14.371674    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:42:14.371684    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:42:14.385786    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:42:14.385797    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
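With the container IDs in hand, the cycle tails the last 400 lines of each container plus the host-level sources: the kubelet and Docker journald units, dmesg, and kubectl describe nodes. The commands below are the ones run above, collected in one place for reference:

    # Host-level logs gathered on every cycle.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # Per-container logs, e.g. one of the two kube-apiserver containers
    # found this cycle:
    docker logs --tail 400 3f2f145a8f16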
	I0408 04:42:14.403801    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:42:14.403812    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:42:14.403839    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:42:14.403843    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:42:14.403847    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:42:14.403851    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:42:14.403853    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
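The one kubelet problem the log scanner keeps flagging is a node-authorizer denial: a kubelet may only read a ConfigMap that a pod bound to its node references, and "no relationship found" means the apiserver sees no such pod on running-upgrade-835000, consistent with the control plane never becoming healthy during this upgrade. A way to confirm the denial from a working client (standard kubectl; impersonating the node identity requires impersonation rights, so this is a diagnostic sketch, not something the test itself runs):

    # Does any coredns pod actually sit on this node? The node authorizer
    # only grants a kubelet access to ConfigMaps referenced by its pods.
    kubectl -n kube-system get pods -o wide | grep coredns
    # Evaluate the exact access the kubelet was denied in the log:
    kubectl auth can-i list configmaps -n kube-system \
        --as=system:node:running-upgrade-835000 --as-group=system:nodes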
	I0408 04:42:24.407979    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:42:29.410689    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:42:29.410808    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:42:29.422545    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:42:29.422625    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:42:29.434285    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:42:29.434353    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:42:29.445770    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:42:29.445845    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:42:29.457369    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:42:29.457450    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:42:29.468415    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:42:29.468504    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:42:29.479992    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:42:29.480073    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:42:29.493213    9654 logs.go:276] 0 containers: []
	W0408 04:42:29.493224    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:42:29.493301    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:42:29.509576    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:42:29.509597    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:42:29.509601    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:42:29.548612    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:42:29.548711    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:42:29.549222    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:42:29.549231    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:42:29.563382    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:42:29.563397    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:42:29.582174    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:42:29.582187    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:42:29.596813    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:42:29.596828    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:42:29.609163    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:42:29.609181    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:42:29.620855    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:42:29.620872    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:42:29.631618    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:42:29.631631    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:42:29.644195    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:42:29.644207    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:42:29.656126    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:42:29.656141    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:42:29.667867    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:42:29.667879    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:42:29.680605    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:42:29.680618    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:42:29.685437    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:42:29.685444    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:42:29.722335    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:42:29.722348    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:42:29.738229    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:42:29.738243    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:42:29.756614    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:42:29.756626    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:42:29.768355    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:42:29.768369    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:42:29.791565    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:42:29.791573    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:42:29.791596    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:42:29.791600    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:42:29.791604    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:42:29.791608    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:42:29.791611    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:42:39.795302    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:42:44.797506    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:42:44.797694    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:42:44.812995    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:42:44.813080    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:42:44.824950    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:42:44.825027    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:42:44.835719    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:42:44.835788    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:42:44.846523    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:42:44.846597    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:42:44.857319    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:42:44.857393    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:42:44.867797    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:42:44.867864    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:42:44.878402    9654 logs.go:276] 0 containers: []
	W0408 04:42:44.878413    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:42:44.878474    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:42:44.888571    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:42:44.888589    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:42:44.888595    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:42:44.900386    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:42:44.900400    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:42:44.912979    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:42:44.912994    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:42:44.924180    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:42:44.924192    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:42:44.947216    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:42:44.947223    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:42:44.959296    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:42:44.959311    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:42:44.973646    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:42:44.973660    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:42:44.990694    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:42:44.990704    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:42:45.026727    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:42:45.026740    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:42:45.040293    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:42:45.040305    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:42:45.055201    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:42:45.055214    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:42:45.070684    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:42:45.070696    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:42:45.084808    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:42:45.084819    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:42:45.095632    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:42:45.095644    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:42:45.107565    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:42:45.107579    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:42:45.144160    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:42:45.144253    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:42:45.144760    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:42:45.144766    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:42:45.148773    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:42:45.148781    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:42:45.160939    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:42:45.160949    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:42:45.160975    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:42:45.160979    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:42:45.160983    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:42:45.160988    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:42:45.160992    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:42:55.165129    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:00.167900    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:00.168330    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:43:00.211295    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:43:00.211438    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:43:00.234445    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:43:00.234571    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:43:00.250896    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:43:00.250969    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:43:00.263013    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:43:00.263088    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:43:00.274138    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:43:00.274218    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:43:00.286039    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:43:00.286129    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:43:00.296927    9654 logs.go:276] 0 containers: []
	W0408 04:43:00.296936    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:43:00.296988    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:43:00.307253    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:43:00.307272    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:43:00.307277    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:43:00.343700    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:00.343791    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:00.344310    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:43:00.344314    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:43:00.379476    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:43:00.379490    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:43:00.391287    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:43:00.391301    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:43:00.418342    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:43:00.418354    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:43:00.431933    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:43:00.431943    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:43:00.436695    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:43:00.436705    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:43:00.453496    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:43:00.453507    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:43:00.466009    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:43:00.466025    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:43:00.481490    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:43:00.481501    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:43:00.492940    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:43:00.492950    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:43:00.505558    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:43:00.505572    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:43:00.516746    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:43:00.516757    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:43:00.534835    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:43:00.534848    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:43:00.552277    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:43:00.552288    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:43:00.564129    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:43:00.564140    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:43:00.587878    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:43:00.587885    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:43:00.599526    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:00.599535    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:43:00.599566    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:43:00.599572    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:00.599576    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:00.599580    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:00.599584    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:43:10.602420    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:15.604320    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:15.604532    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:43:15.617227    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:43:15.617308    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:43:15.629049    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:43:15.629130    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:43:15.640347    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:43:15.640428    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:43:15.651122    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:43:15.651199    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:43:15.661584    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:43:15.661656    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:43:15.672564    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:43:15.672631    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:43:15.682585    9654 logs.go:276] 0 containers: []
	W0408 04:43:15.682598    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:43:15.682650    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:43:15.694781    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:43:15.694813    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:43:15.694819    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:43:15.732648    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:15.732739    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:15.733243    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:43:15.733250    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:43:15.744538    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:43:15.744551    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:43:15.760248    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:43:15.760264    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:43:15.771639    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:43:15.771653    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:43:15.795255    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:43:15.795262    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:43:15.799779    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:43:15.799787    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:43:15.833034    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:43:15.833042    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:43:15.847446    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:43:15.847455    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:43:15.863069    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:43:15.863080    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:43:15.887449    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:43:15.887459    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:43:15.899399    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:43:15.899413    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:43:15.914463    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:43:15.914475    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:43:15.930864    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:43:15.930874    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:43:15.943689    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:43:15.943701    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:43:15.955630    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:43:15.955642    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:43:15.969498    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:43:15.969507    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:43:15.980570    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:15.980585    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:43:15.980615    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:43:15.980620    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:15.980624    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:15.980627    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:15.980630    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:43:25.984575    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:30.986703    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:30.986822    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:43:30.997418    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:43:30.997495    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:43:31.008311    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:43:31.008388    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:43:31.019111    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:43:31.019182    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:43:31.029471    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:43:31.029545    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:43:31.040012    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:43:31.040084    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:43:31.051562    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:43:31.051632    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:43:31.062201    9654 logs.go:276] 0 containers: []
	W0408 04:43:31.062214    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:43:31.062280    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:43:31.072911    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:43:31.072948    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:43:31.072955    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:43:31.110847    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:43:31.110859    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:43:31.125342    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:43:31.125356    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:43:31.142643    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:43:31.142658    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:43:31.155061    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:43:31.155072    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:43:31.178927    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:43:31.178934    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:43:31.218308    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:31.218401    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:31.218934    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:43:31.218938    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:43:31.232595    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:43:31.232607    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:43:31.244301    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:43:31.244312    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:43:31.255638    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:43:31.255651    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:43:31.267330    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:43:31.267342    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:43:31.272164    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:43:31.272178    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:43:31.284744    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:43:31.284757    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:43:31.299618    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:43:31.299628    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:43:31.311220    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:43:31.311233    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:43:31.323132    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:43:31.323144    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:43:31.336350    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:43:31.336365    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:43:31.350227    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:31.350242    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:43:31.350281    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:43:31.350286    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:31.350289    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:31.350293    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:31.350296    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:43:41.354262    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:46.356416    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:46.356589    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:43:46.367190    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:43:46.367266    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:43:46.378006    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:43:46.378091    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:43:46.388495    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:43:46.388569    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:43:46.399065    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:43:46.399157    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:43:46.410379    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:43:46.410452    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:43:46.420974    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:43:46.421061    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:43:46.431293    9654 logs.go:276] 0 containers: []
	W0408 04:43:46.431304    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:43:46.431376    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:43:46.442287    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:43:46.442305    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:43:46.442321    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:43:46.459903    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:43:46.459914    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:43:46.473745    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:43:46.473758    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:43:46.478585    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:43:46.478591    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:43:46.490277    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:43:46.490291    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:43:46.507685    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:43:46.507695    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:43:46.522019    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:43:46.522030    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:43:46.533347    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:43:46.533358    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:43:46.552569    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:43:46.552579    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:43:46.564554    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:43:46.564564    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:43:46.576980    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:43:46.576993    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:43:46.589355    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:43:46.589366    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:43:46.614168    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:43:46.614183    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:43:46.650692    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:46.650785    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:46.651319    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:43:46.651330    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:43:46.692342    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:43:46.692359    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:43:46.709619    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:43:46.709632    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:43:46.723086    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:43:46.723096    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:43:46.734680    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:46.734690    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:43:46.734715    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:43:46.734721    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	  Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:46.734726    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	  Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:46.734730    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:46.734735    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:43:56.735510    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:01.736297    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:01.736371    9654 kubeadm.go:591] duration metric: took 4m7.312357125s to restartPrimaryControlPlane
	W0408 04:44:01.736438    9654 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 04:44:01.736462    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0408 04:44:02.730101    9654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 04:44:02.735356    9654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 04:44:02.738238    9654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 04:44:02.741193    9654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 04:44:02.741200    9654 kubeadm.go:156] found existing configuration files:
	
	I0408 04:44:02.741226    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/admin.conf
	I0408 04:44:02.743857    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 04:44:02.743882    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 04:44:02.746618    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/kubelet.conf
	I0408 04:44:02.749755    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 04:44:02.749774    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 04:44:02.752850    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/controller-manager.conf
	I0408 04:44:02.755294    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 04:44:02.755315    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 04:44:02.758272    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/scheduler.conf
	I0408 04:44:02.761078    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 04:44:02.761096    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 04:44:02.763730    9654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 04:44:02.780855    9654 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0408 04:44:02.780897    9654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 04:44:02.831868    9654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 04:44:02.831938    9654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 04:44:02.831999    9654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 04:44:02.880292    9654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 04:44:02.884500    9654 out.go:204]   - Generating certificates and keys ...
	I0408 04:44:02.884538    9654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 04:44:02.884567    9654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 04:44:02.884612    9654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 04:44:02.884641    9654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 04:44:02.884680    9654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 04:44:02.884705    9654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 04:44:02.884738    9654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 04:44:02.884770    9654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 04:44:02.884832    9654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 04:44:02.884982    9654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 04:44:02.885009    9654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 04:44:02.885037    9654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 04:44:02.976821    9654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 04:44:03.098756    9654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 04:44:03.199435    9654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 04:44:03.293163    9654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 04:44:03.321421    9654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 04:44:03.321743    9654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 04:44:03.321766    9654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 04:44:03.407932    9654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 04:44:03.412166    9654 out.go:204]   - Booting up control plane ...
	I0408 04:44:03.412211    9654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 04:44:03.412251    9654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 04:44:03.412287    9654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 04:44:03.412407    9654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 04:44:03.413513    9654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 04:44:07.919498    9654 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505789 seconds
	I0408 04:44:07.919577    9654 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 04:44:07.924552    9654 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 04:44:08.444417    9654 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 04:44:08.444822    9654 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-835000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 04:44:08.948794    9654 kubeadm.go:309] [bootstrap-token] Using token: mc4h03.s6znjht445679d25
	I0408 04:44:08.953343    9654 out.go:204]   - Configuring RBAC rules ...
	I0408 04:44:08.953407    9654 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 04:44:08.953456    9654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 04:44:08.961488    9654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 04:44:08.962591    9654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 04:44:08.963646    9654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 04:44:08.964752    9654 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 04:44:08.970423    9654 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 04:44:09.142274    9654 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 04:44:09.353653    9654 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 04:44:09.354019    9654 kubeadm.go:309] 
	I0408 04:44:09.354048    9654 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 04:44:09.354053    9654 kubeadm.go:309] 
	I0408 04:44:09.354094    9654 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 04:44:09.354098    9654 kubeadm.go:309] 
	I0408 04:44:09.354113    9654 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 04:44:09.354146    9654 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 04:44:09.354171    9654 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 04:44:09.354175    9654 kubeadm.go:309] 
	I0408 04:44:09.354202    9654 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 04:44:09.354207    9654 kubeadm.go:309] 
	I0408 04:44:09.354230    9654 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 04:44:09.354235    9654 kubeadm.go:309] 
	I0408 04:44:09.354266    9654 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 04:44:09.354300    9654 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 04:44:09.354332    9654 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 04:44:09.354334    9654 kubeadm.go:309] 
	I0408 04:44:09.354372    9654 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 04:44:09.354406    9654 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 04:44:09.354408    9654 kubeadm.go:309] 
	I0408 04:44:09.354446    9654 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mc4h03.s6znjht445679d25 \
	I0408 04:44:09.354491    9654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63c1082056c9546e83bc7e238ddca3361d3bc0d4a9173109edd9ba5d9e410231 \
	I0408 04:44:09.354500    9654 kubeadm.go:309] 	--control-plane 
	I0408 04:44:09.354502    9654 kubeadm.go:309] 
	I0408 04:44:09.354540    9654 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 04:44:09.354543    9654 kubeadm.go:309] 
	I0408 04:44:09.354580    9654 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mc4h03.s6znjht445679d25 \
	I0408 04:44:09.354646    9654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63c1082056c9546e83bc7e238ddca3361d3bc0d4a9173109edd9ba5d9e410231 
	I0408 04:44:09.354699    9654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 04:44:09.354704    9654 cni.go:84] Creating CNI manager for ""
	I0408 04:44:09.354712    9654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:44:09.358995    9654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 04:44:09.366926    9654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 04:44:09.370248    9654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 04:44:09.375071    9654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 04:44:09.375148    9654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-835000 minikube.k8s.io/updated_at=2024_04_08T04_44_09_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=running-upgrade-835000 minikube.k8s.io/primary=true
	I0408 04:44:09.375149    9654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 04:44:09.409671    9654 kubeadm.go:1107] duration metric: took 34.552416ms to wait for elevateKubeSystemPrivileges
	I0408 04:44:09.409725    9654 ops.go:34] apiserver oom_adj: -16
	W0408 04:44:09.418535    9654 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 04:44:09.418547    9654 kubeadm.go:393] duration metric: took 4m15.022107042s to StartCluster
	I0408 04:44:09.418559    9654 settings.go:142] acquiring lock: {Name:mkd5c8378547f472aec7259eff81e77b1454222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:44:09.418700    9654 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:44:09.419051    9654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/kubeconfig: {Name:mk04d6060f19666b377da34a3aa7f8b9bcbb5054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:44:09.419285    9654 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:44:09.425827    9654 out.go:177] * Verifying Kubernetes components...
	I0408 04:44:09.419315    9654 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 04:44:09.419578    9654 config.go:182] Loaded profile config "running-upgrade-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:44:09.437961    9654 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-835000"
	I0408 04:44:09.437972    9654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:44:09.437974    9654 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-835000"
	W0408 04:44:09.437977    9654 addons.go:243] addon storage-provisioner should already be in state true
	I0408 04:44:09.437986    9654 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-835000"
	I0408 04:44:09.437990    9654 host.go:66] Checking if "running-upgrade-835000" exists ...
	I0408 04:44:09.437997    9654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-835000"
	I0408 04:44:09.439115    9654 kapi.go:59] client config for running-upgrade-835000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/client.key", CAFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10237f940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 04:44:09.440369    9654 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-835000"
	W0408 04:44:09.440374    9654 addons.go:243] addon default-storageclass should already be in state true
	I0408 04:44:09.440384    9654 host.go:66] Checking if "running-upgrade-835000" exists ...
	I0408 04:44:09.444837    9654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:44:09.452876    9654 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 04:44:09.452884    9654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 04:44:09.452891    9654 sshutil.go:53] new ssh client: &{IP:localhost Port:51209 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/running-upgrade-835000/id_rsa Username:docker}
	I0408 04:44:09.453543    9654 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 04:44:09.453548    9654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 04:44:09.453552    9654 sshutil.go:53] new ssh client: &{IP:localhost Port:51209 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/running-upgrade-835000/id_rsa Username:docker}
	I0408 04:44:09.533239    9654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 04:44:09.538595    9654 api_server.go:52] waiting for apiserver process to appear ...
	I0408 04:44:09.538639    9654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:44:09.543335    9654 api_server.go:72] duration metric: took 124.035292ms to wait for apiserver process to appear ...
	I0408 04:44:09.543364    9654 api_server.go:88] waiting for apiserver healthz status ...
	I0408 04:44:09.543372    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:09.561629    9654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 04:44:09.566009    9654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 04:44:14.545382    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:14.545411    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:19.545586    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:19.545655    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:24.545862    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:24.545935    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:29.546184    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:29.546207    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:34.546603    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:34.546629    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:39.547192    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:39.547214    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0408 04:44:39.914608    9654 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0408 04:44:39.919335    9654 out.go:177] * Enabled addons: storage-provisioner
	I0408 04:44:39.931262    9654 addons.go:505] duration metric: took 30.512388583s for enable addons: enabled=[storage-provisioner]
	I0408 04:44:44.547927    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:44.547965    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:49.548681    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:49.548727    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:54.549963    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:54.549999    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:59.551636    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:59.551660    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:04.553529    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:04.553555    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:09.555024    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:09.555211    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:09.604791    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:45:09.604867    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:09.617091    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:45:09.617160    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:09.629002    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:45:09.629079    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:09.646638    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:45:09.646713    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:09.658776    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:45:09.658850    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:09.669861    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:45:09.669933    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:09.679943    9654 logs.go:276] 0 containers: []
	W0408 04:45:09.679954    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:09.680009    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:09.693776    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:45:09.693793    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:45:09.693799    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:45:09.705171    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:45:09.705184    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:45:09.717479    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:45:09.717492    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:45:09.734638    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:45:09.734652    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:45:09.748947    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:09.748959    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:09.753853    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:09.753861    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:09.814250    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:45:09.814264    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:45:09.829366    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:45:09.829380    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:45:09.845394    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:45:09.845404    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:45:09.857479    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:09.857490    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:09.881482    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:45:09.881490    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:09.893744    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:09.893755    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:45:09.911390    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:09.911483    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:09.927384    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:45:09.927390    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:45:09.942425    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:09.942434    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:45:09.942462    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:45:09.942466    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:09.942470    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:09.942474    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:09.942477    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:45:19.946565    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:24.948990    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:24.949243    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:24.974593    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:45:24.974720    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:24.991922    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:45:24.992027    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:25.005348    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:45:25.005425    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:25.021559    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:45:25.021636    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:25.032555    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:45:25.032630    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:25.044825    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:45:25.044897    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:25.055249    9654 logs.go:276] 0 containers: []
	W0408 04:45:25.055264    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:25.055324    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:25.065860    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:45:25.065877    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:45:25.065882    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:45:25.078299    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:45:25.078310    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:25.090481    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:25.090491    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:45:25.108296    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:25.108398    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:25.125325    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:25.125344    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:25.164102    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:45:25.164113    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:45:25.182593    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:45:25.182604    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:45:25.197860    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:45:25.197871    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:45:25.210431    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:45:25.210442    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:45:25.232308    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:45:25.232323    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:45:25.244054    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:25.244064    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:25.267212    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:25.267221    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:25.271546    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:45:25.271553    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:45:25.285612    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:45:25.285625    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:45:25.297302    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:25.297313    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:45:25.297340    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:45:25.297345    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:25.297348    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:25.297354    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:25.297356    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:45:35.301344    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:40.303583    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:40.303759    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:40.316432    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:45:40.316520    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:40.327807    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:45:40.327884    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:40.338653    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:45:40.338727    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:40.349534    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:45:40.349607    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:40.360101    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:45:40.360171    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:40.371009    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:45:40.371080    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:40.381769    9654 logs.go:276] 0 containers: []
	W0408 04:45:40.381780    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:40.381838    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:40.392271    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:45:40.392299    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:40.392305    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:40.396920    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:45:40.396929    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:45:40.410975    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:45:40.410985    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:45:40.431138    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:45:40.431149    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:45:40.444710    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:45:40.444723    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:45:40.456450    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:45:40.456463    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:45:40.478786    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:40.478797    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:40.501994    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:40.502002    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:45:40.519535    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:40.519626    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:40.535822    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:45:40.535828    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:40.547071    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:45:40.547083    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:45:40.558941    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:45:40.558952    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:45:40.574110    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:45:40.574123    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:45:40.585895    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:40.585905    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:40.623013    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:40.623024    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:45:40.623051    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:45:40.623056    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:40.623059    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:40.623063    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:40.623066    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:45:50.627089    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:55.629364    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:55.629561    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:55.644348    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:45:55.644428    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:55.656540    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:45:55.656615    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:55.667968    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:45:55.668041    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:55.678238    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:45:55.678299    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:55.688769    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:45:55.688845    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:55.699441    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:45:55.699510    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:55.712867    9654 logs.go:276] 0 containers: []
	W0408 04:45:55.712877    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:55.712936    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:55.723038    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:45:55.723055    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:45:55.723060    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:45:55.737733    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:45:55.737745    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:45:55.749449    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:45:55.749462    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:45:55.764823    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:45:55.764838    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:45:55.780812    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:45:55.780825    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:45:55.798331    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:45:55.798341    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:45:55.811126    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:55.811138    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:55.815709    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:55.815718    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:55.857740    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:45:55.857753    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:45:55.868915    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:55.868926    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:55.894124    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:45:55.894136    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:55.905496    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:55.905506    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:45:55.924653    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:55.924743    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:55.940735    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:45:55.940740    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:45:55.961617    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:55.961627    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:45:55.961655    9654 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0408 04:45:55.961659    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	  Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:55.961662    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:55.961666    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:55.961669    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
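The api_server.go:253/269 pairs that follow repeat the same probe loop: minikube checks the apiserver's /healthz endpoint with a short client timeout, and each failed probe triggers another log-gathering pass like the one above. Below is a minimal Go sketch of that polling pattern; the URL is taken verbatim from the log, while the 5-second client timeout and 10-second retry interval are inferred from the timestamps, and the TLS handling is an assumption for the sketch, not minikube's actual implementation.

// healthz_probe.go - illustrative sketch of the polling pattern seen in the
// log (api_server.go:253 "Checking apiserver healthz ...").
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; the log shows ~5s between "Checking" and "stopped:"
		Transport: &http.Transport{
			// The guest apiserver presents a self-signed cert; skipping
			// verification here is an assumption for this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz" // endpoint from the log
	for i := 0; i < 3; i++ {
		if err := checkHealthz(url); err != nil {
			fmt.Println(err) // on failure, the diagnostics pass would run here
		} else {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(10 * time.Second) // assumed retry interval (~10s gaps in the log)
	}
}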
	I0408 04:46:05.965712    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:10.967269    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:10.967405    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:10.983927    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:46:10.984006    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:10.993961    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:46:10.994036    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:11.004615    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:46:11.004690    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:11.015299    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:46:11.015372    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:11.026158    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:46:11.026234    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:11.036759    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:46:11.036830    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:11.047113    9654 logs.go:276] 0 containers: []
	W0408 04:46:11.047125    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:11.047185    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:11.061112    9654 logs.go:276] 1 containers: [483668667a12]
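Each diagnostics pass begins, as above, by locating one container per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}} before tailing its logs. A rough sketch of that discovery step follows, assuming plain os/exec against a local Docker daemon in place of minikube's ssh_runner; the component list mirrors the log, and kindnet legitimately matches zero containers on this cluster.

// discover.go - illustrative sketch of the per-component container discovery
// seen in the log. Hypothetical helper: minikube runs these commands through
// its ssh_runner over SSH rather than locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns IDs of all containers (running or exited) whose name
// matches the kubeadm naming convention k8s_<component>_...
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Component list taken from the log above.
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
	}
}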
	I0408 04:46:11.061128    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:46:11.061134    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:46:11.083668    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:46:11.083679    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:46:11.095558    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:46:11.095568    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:46:11.110427    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:46:11.110436    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:46:11.124548    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:11.124558    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:11.149058    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:46:11.149065    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:11.161528    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:46:11.161538    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:46:11.177040    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:11.177056    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:46:11.196239    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:11.196331    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
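The logs.go:138 warnings above come from scanning the journalctl -u kubelet -n 400 output line by line for known problem signatures; the same two 11:40:14 RBAC lines resurface on every pass because each pass re-reads the tail of the journal. A speculative sketch of such a scan follows, with an assumed pattern list rather than minikube's real known-issue set.

// scanproblems.go - rough sketch of scanning kubelet journal output for
// problem signatures, as logs.go:138 does above. The pattern list is an
// assumption for illustration only.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

var problemPatterns = []string{
	"is forbidden",    // RBAC denials, as in the coredns ConfigMap lines
	"Failed to watch", // reflector watch failures
	"failed to list",
}

func main() {
	// Usage: journalctl -u kubelet -n 400 | scanproblems
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		for _, p := range problemPatterns {
			if strings.Contains(line, p) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}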
	I0408 04:46:11.212065    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:11.212071    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:11.217670    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:11.217681    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:11.254237    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:46:11.254251    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:46:11.268524    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:46:11.268534    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:46:11.280995    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:46:11.281006    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:46:11.298678    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:11.298692    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:46:11.298722    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:46:11.298727    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:11.298731    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:11.298736    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:11.298739    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:46:21.302719    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:26.303210    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:26.303303    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:26.314875    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:46:26.314952    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:26.326915    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:46:26.326995    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:26.338615    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:46:26.338735    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:26.350168    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:46:26.350245    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:26.361320    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:46:26.361389    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:26.371937    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:46:26.372006    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:26.382614    9654 logs.go:276] 0 containers: []
	W0408 04:46:26.382623    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:26.382678    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:26.393520    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:46:26.393532    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:46:26.393536    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:46:26.404645    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:46:26.404655    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:46:26.422204    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:46:26.422214    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:26.434124    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:26.434138    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:46:26.452863    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:26.452956    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:26.469497    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:46:26.469505    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:46:26.481298    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:46:26.481308    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:46:26.495273    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:46:26.495287    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:46:26.507478    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:46:26.507489    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:46:26.519702    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:46:26.519717    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:46:26.534430    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:46:26.534444    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:46:26.547530    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:46:26.547541    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:46:26.559153    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:46:26.559165    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:46:26.574443    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:26.574454    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:26.600616    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:26.600626    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:26.605494    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:26.605501    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:26.646341    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:26.646351    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:46:26.646387    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:46:26.646392    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:26.646396    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:26.646401    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:26.646404    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:46:36.650403    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:41.652678    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:41.652882    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:41.669336    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:46:41.669416    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:41.681635    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:46:41.681716    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:41.693531    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:46:41.693606    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:41.704058    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:46:41.704123    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:41.714262    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:46:41.714323    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:41.724825    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:46:41.724899    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:41.736455    9654 logs.go:276] 0 containers: []
	W0408 04:46:41.736465    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:41.736521    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:41.746984    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:46:41.747002    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:46:41.747008    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:46:41.763940    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:41.763951    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:41.787676    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:41.787685    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:46:41.806723    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:41.806819    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:41.823635    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:46:41.823651    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:46:41.840593    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:46:41.840603    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:46:41.853073    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:46:41.853083    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:46:41.865823    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:46:41.865834    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:46:41.878509    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:46:41.878520    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:46:41.894875    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:41.894888    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:41.933280    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:46:41.933291    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:46:41.951933    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:41.951942    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:41.956994    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:46:41.957006    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:46:41.969216    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:46:41.969227    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:46:41.982814    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:46:41.982822    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:46:41.995331    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:46:41.995346    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:42.007782    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:42.007792    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:46:42.007817    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:46:42.007822    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:42.007826    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:42.007830    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:42.007833    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:46:52.010644    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:57.013116    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:57.013388    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:57.033490    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:46:57.033579    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:57.048248    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:46:57.048326    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:57.060272    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:46:57.060341    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:57.070739    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:46:57.070812    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:57.081235    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:46:57.081300    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:57.094258    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:46:57.094329    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:57.104904    9654 logs.go:276] 0 containers: []
	W0408 04:46:57.104914    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:57.104978    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:57.115527    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:46:57.115542    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:46:57.115548    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:46:57.130011    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:46:57.130022    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:46:57.142330    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:46:57.142339    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:46:57.157002    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:46:57.157011    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:46:57.181695    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:46:57.181704    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:46:57.193220    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:46:57.193229    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:57.209590    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:57.209599    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:46:57.227729    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:57.227832    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:57.244283    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:57.244290    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:57.279625    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:46:57.279634    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:46:57.315855    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:57.315865    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:57.320455    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:46:57.320462    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:46:57.332138    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:46:57.332148    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:46:57.345639    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:57.345650    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:57.369151    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:46:57.369159    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:46:57.380932    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:46:57.380942    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:46:57.392427    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:57.392438    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:46:57.392465    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:46:57.392471    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:57.392475    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:57.392480    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:57.392482    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:47:07.396516    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:12.398726    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:12.398832    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:12.409612    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:47:12.409690    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:12.420897    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:47:12.420976    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:12.431924    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:47:12.432000    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:12.443019    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:47:12.443084    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:12.453198    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:47:12.453263    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:12.463531    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:47:12.463593    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:12.473895    9654 logs.go:276] 0 containers: []
	W0408 04:47:12.473910    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:12.473978    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:12.484228    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:47:12.484244    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:12.484250    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:12.524134    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:47:12.524145    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:47:12.538493    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:47:12.538503    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:47:12.550854    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:47:12.550864    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:47:12.565946    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:47:12.565956    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:47:12.578162    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:12.578172    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:47:12.596616    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:12.596708    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:12.612914    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:47:12.612921    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:47:12.625897    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:47:12.625907    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:47:12.643220    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:47:12.643232    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:47:12.655145    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:12.655159    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:12.659390    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:47:12.659396    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:12.672834    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:47:12.672845    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:47:12.684959    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:47:12.684970    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:47:12.704419    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:12.704430    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:12.728057    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:47:12.728066    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:47:12.748289    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:12.748299    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:47:12.748326    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:47:12.748331    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:12.748334    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:12.748338    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:12.748341    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:47:22.752353    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:27.754528    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:27.754737    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:27.769644    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:47:27.769730    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:27.781775    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:47:27.781856    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:27.800600    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:47:27.800676    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:27.811314    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:47:27.811387    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:27.821787    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:47:27.821855    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:27.831880    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:47:27.831955    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:27.842300    9654 logs.go:276] 0 containers: []
	W0408 04:47:27.842312    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:27.842374    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:27.860100    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:47:27.860117    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:47:27.860122    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:47:27.872815    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:47:27.872826    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:27.884904    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:27.884914    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:47:27.904171    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:27.904264    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:27.921048    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:27.921063    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:27.926264    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:47:27.926272    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:47:27.938613    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:47:27.938624    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:47:27.953885    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:47:27.953896    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:47:27.968300    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:47:27.968315    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:47:27.980079    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:47:27.980089    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:47:27.991866    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:47:27.991877    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:47:28.010189    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:28.010201    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:28.049436    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:47:28.049452    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:47:28.064205    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:47:28.064218    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:47:28.076146    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:28.076160    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:28.101501    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:47:28.101511    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:47:28.113541    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:28.113554    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:47:28.113579    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:47:28.113584    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:28.113597    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:28.113694    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:28.113720    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:47:38.116268    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:43.118540    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:43.118921    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:43.161860    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:47:43.161999    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:43.182742    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:47:43.182843    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:43.198343    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:47:43.198427    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:43.210720    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:47:43.210796    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:43.222511    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:47:43.222581    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:43.233472    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:47:43.233539    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:43.244814    9654 logs.go:276] 0 containers: []
	W0408 04:47:43.244825    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:43.244881    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:43.255212    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:47:43.255228    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:43.255234    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:43.259847    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:43.259854    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:43.294623    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:47:43.294635    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:47:43.308833    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:47:43.308846    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:47:43.321513    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:47:43.321526    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:47:43.333704    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:47:43.333714    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:47:43.345487    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:47:43.345496    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:47:43.367034    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:47:43.367048    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:43.378958    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:43.378970    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:47:43.399583    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:43.399678    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:43.415668    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:47:43.415676    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:47:43.430028    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:47:43.430040    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:47:43.443815    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:47:43.443829    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:47:43.460967    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:43.460978    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:43.485934    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:47:43.485946    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:47:43.498171    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:47:43.498182    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:47:43.510012    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:43.510022    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:47:43.510052    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:47:43.510060    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:43.510065    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:43.510069    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:43.510072    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:47:53.514156    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:58.516392    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
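	The two lines above are one pass of minikube's API-server readiness loop: roughly every ten seconds it probes the guest's /healthz endpoint, and each probe is abandoned at the five-second client timeout. 10.0.2.15 is the guest-side address of QEMU's user-mode (slirp) network, so the endpoint is reachable only from inside the VM; a manual spot-check, assuming the profile from this run and that curl is present in the guest, would be:
	
	    out/minikube-darwin-arm64 ssh -p running-upgrade-835000 -- curl -k https://10.0.2.15:8443/healthz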
	I0408 04:47:58.516605    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:58.541580    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:47:58.541680    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:58.556243    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:47:58.556327    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:58.568274    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:47:58.568356    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:58.579426    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:47:58.579524    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:58.590192    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:47:58.590267    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:58.600976    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:47:58.601070    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:58.622955    9654 logs.go:276] 0 containers: []
	W0408 04:47:58.622967    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:58.623029    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:58.645410    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:47:58.645428    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:58.645434    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:58.650214    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:47:58.650222    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:47:58.662188    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:58.662200    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:58.698984    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:47:58.698996    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:47:58.713605    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:47:58.713615    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:47:58.725741    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:47:58.725754    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:47:58.741761    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:47:58.741772    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:47:58.753740    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:47:58.753752    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:47:58.768305    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:47:58.768316    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:47:58.779822    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:47:58.779833    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:47:58.794901    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:58.794914    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:58.817507    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:47:58.817514    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
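	The container-status gather above is deliberately tolerant: the backticked `which crictl || echo crictl` substitutes the crictl path when the binary exists (or the bare name, so the failure message stays readable), and if that invocation fails the trailing `|| sudo docker ps -a` falls back to listing containers through Docker directly.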
	I0408 04:47:58.829423    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:58.829435    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:47:58.848460    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:58.848551    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:58.864543    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:47:58.864550    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:47:58.879039    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:47:58.879049    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:47:58.896328    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:58.896339    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:47:58.896365    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:47:58.896370    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:58.896388    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:58.896395    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:58.896444    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
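	The only kubelet problem these sweeps ever surface is the pair flagged above: the kube-apiserver's Node authorizer refuses the kubelet's list/watch of the coredns ConfigMap because its authorization graph links no pod on node running-upgrade-835000 to that object (hence "no relationship found between node ... and this object"). One way to inspect the node's registration from inside the guest, reusing the kubectl binary and kubeconfig this log already invokes, would be:
	
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node running-upgrade-835000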
	I0408 04:48:08.900435    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:13.902693    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:13.908377    9654 out.go:177] 
	W0408 04:48:13.913261    9654 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0408 04:48:13.913271    9654 out.go:239] * 
	W0408 04:48:13.913942    9654 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
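	The box's advice maps directly onto the post-mortem below: the harness captures the tail of the same material with `logs -n 25`, and the full bundle the issue template asks for could be produced with the test's own binary and profile:
	
	    out/minikube-darwin-arm64 -p running-upgrade-835000 logs --file=logs.txt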
	I0408 04:48:13.924208    9654 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-835000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-04-08 04:48:13.98286 -0700 PDT m=+1320.617295417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-835000 -n running-upgrade-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-835000 -n running-upgrade-835000: exit status 2 (15.673144667s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-835000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-flag-431000          | force-systemd-flag-431000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-env-907000              | force-systemd-env-907000  | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-907000           | force-systemd-env-907000  | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT | 08 Apr 24 04:38 PDT |
	| start   | -p docker-flags-886000                | docker-flags-886000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | --cache-images=false                  |                           |         |                |                     |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=false                          |                           |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |                |                     |                     |
	|         | --docker-opt=debug                    |                           |         |                |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-431000             | force-systemd-flag-431000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-431000          | force-systemd-flag-431000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT | 08 Apr 24 04:38 PDT |
	| start   | -p cert-expiration-040000             | cert-expiration-040000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | docker-flags-886000 ssh               | docker-flags-886000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=Environment                |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| ssh     | docker-flags-886000 ssh               | docker-flags-886000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=ExecStart                  |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| delete  | -p docker-flags-886000                | docker-flags-886000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT | 08 Apr 24 04:38 PDT |
	| start   | -p cert-options-519000                | cert-options-519000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | cert-options-519000 ssh               | cert-options-519000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-519000 -- sudo        | cert-options-519000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-519000                | cert-options-519000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:38 PDT | 08 Apr 24 04:38 PDT |
	| start   | -p running-upgrade-835000             | minikube                  | jenkins | v1.26.0        | 08 Apr 24 04:38 PDT | 08 Apr 24 04:39 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| start   | -p running-upgrade-835000             | running-upgrade-835000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:39 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| start   | -p cert-expiration-040000             | cert-expiration-040000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:41 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-040000             | cert-expiration-040000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:41 PDT | 08 Apr 24 04:41 PDT |
	| start   | -p kubernetes-upgrade-305000          | kubernetes-upgrade-305000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:41 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-305000          | kubernetes-upgrade-305000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:41 PDT | 08 Apr 24 04:41 PDT |
	| start   | -p kubernetes-upgrade-305000          | kubernetes-upgrade-305000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:41 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0     |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-305000          | kubernetes-upgrade-305000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:42 PDT | 08 Apr 24 04:42 PDT |
	| start   | -p stopped-upgrade-462000             | minikube                  | jenkins | v1.26.0        | 08 Apr 24 04:42 PDT | 08 Apr 24 04:42 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-462000 stop           | minikube                  | jenkins | v1.26.0        | 08 Apr 24 04:42 PDT | 08 Apr 24 04:42 PDT |
	| start   | -p stopped-upgrade-462000             | stopped-upgrade-462000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:42 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
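	A pattern worth noting in the Audit table: every `start` issued by the v1.33.0-beta.0 binary against --driver=qemu2 has an empty End Time, while the v1.26.0 starts and the stop/delete commands all completed, so the timeouts are specific to starts under the freshly built binary, which is exactly what the running-upgrade and stopped-upgrade profiles exercise.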
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 04:42:56
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
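	Per the format line above, a record such as "I0408 04:42:56.319716    9805 out.go:291]" decodes as severity I (info), month/day 0408, wall-clock time with microseconds, thread id 9805, and the emitting source location out.go:291. Assuming the bundle were saved as logs.txt, the warning and error records alone could be pulled out with:
	
	    grep -E '^[[:space:]]*[WE][0-9]{4} ' logs.txt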
	I0408 04:42:56.319716    9805 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:42:56.319869    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:42:56.319873    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:42:56.319876    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:42:56.320027    9805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:42:56.321175    9805 out.go:298] Setting JSON to false
	I0408 04:42:56.340427    9805 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6145,"bootTime":1712570431,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:42:56.340512    9805 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:42:56.344894    9805 out.go:177] * [stopped-upgrade-462000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:42:56.352854    9805 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:42:56.355899    9805 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:42:56.352898    9805 notify.go:220] Checking for updates...
	I0408 04:42:56.361806    9805 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:42:56.364849    9805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:42:56.367876    9805 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:42:56.370908    9805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:42:56.374115    9805 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:42:56.377859    9805 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 04:42:56.380774    9805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:42:56.384821    9805 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:42:56.391861    9805 start.go:297] selected driver: qemu2
	I0408 04:42:56.391868    9805 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51448 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:42:56.391938    9805 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:42:56.394721    9805 cni.go:84] Creating CNI manager for ""
	I0408 04:42:56.394739    9805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:42:56.394767    9805 start.go:340] cluster config:
	{Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51448 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:42:56.394819    9805 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:42:56.401794    9805 out.go:177] * Starting "stopped-upgrade-462000" primary control-plane node in "stopped-upgrade-462000" cluster
	I0408 04:42:56.405777    9805 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 04:42:56.405807    9805 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0408 04:42:56.405819    9805 cache.go:56] Caching tarball of preloaded images
	I0408 04:42:56.405901    9805 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:42:56.405908    9805 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0408 04:42:56.405971    9805 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/config.json ...
	I0408 04:42:56.406552    9805 start.go:360] acquireMachinesLock for stopped-upgrade-462000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:42:56.406589    9805 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "stopped-upgrade-462000"
	I0408 04:42:56.406598    9805 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:42:56.406604    9805 fix.go:54] fixHost starting: 
	I0408 04:42:56.406720    9805 fix.go:112] recreateIfNeeded on stopped-upgrade-462000: state=Stopped err=<nil>
	W0408 04:42:56.406729    9805 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:42:56.414873    9805 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-462000" ...
	I0408 04:42:55.165129    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:42:56.418050    9805 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51414-:22,hostfwd=tcp::51415-:2376,hostname=stopped-upgrade-462000 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/disk.qcow2
	I0408 04:42:56.463095    9805 main.go:141] libmachine: STDOUT: 
	I0408 04:42:56.463129    9805 main.go:141] libmachine: STDERR: 
	I0408 04:42:56.463136    9805 main.go:141] libmachine: Waiting for VM to start (ssh -p 51414 docker@127.0.0.1)...
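	In the qemu-system-aarch64 invocation above, the `-nic user,...` hostfwd rules are what make the guest reachable: host port 51414 forwards to guest port 22 (SSH) and 51415 to guest port 2376 (Docker's TLS socket), which is why libmachine then waits on `ssh -p 51414 docker@127.0.0.1`. The equivalent manual login, using the machine key this run provisions, would be:
	
	    ssh -p 51414 -i /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa docker@127.0.0.1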
	I0408 04:43:00.167900    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:00.168330    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:43:00.211295    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:43:00.211438    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:43:00.234445    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:43:00.234571    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:43:00.250896    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:43:00.250969    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:43:00.263013    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:43:00.263088    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:43:00.274138    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:43:00.274218    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:43:00.286039    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:43:00.286129    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:43:00.296927    9654 logs.go:276] 0 containers: []
	W0408 04:43:00.296936    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:43:00.296988    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:43:00.307253    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:43:00.307272    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:43:00.307277    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:43:00.343700    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:00.343791    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:00.344310    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:43:00.344314    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:43:00.379476    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:43:00.379490    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:43:00.391287    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:43:00.391301    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:43:00.418342    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:43:00.418354    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:43:00.431933    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:43:00.431943    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:43:00.436695    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:43:00.436705    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:43:00.453496    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:43:00.453507    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:43:00.466009    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:43:00.466025    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:43:00.481490    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:43:00.481501    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:43:00.492940    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:43:00.492950    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:43:00.505558    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:43:00.505572    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:43:00.516746    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:43:00.516757    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:43:00.534835    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:43:00.534848    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:43:00.552277    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:43:00.552288    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:43:00.564129    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:43:00.564140    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:43:00.587878    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:43:00.587885    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:43:00.599526    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:00.599535    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:43:00.599566    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:43:00.599572    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:00.599576    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:00.599580    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:00.599584    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:43:10.602420    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:15.604320    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:15.604532    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:43:15.617227    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:43:15.617308    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:43:15.629049    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:43:15.629130    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:43:15.640347    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:43:15.640428    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:43:15.651122    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:43:15.651199    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:43:15.661584    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:43:15.661656    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:43:15.672564    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:43:15.672631    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:43:15.682585    9654 logs.go:276] 0 containers: []
	W0408 04:43:15.682598    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:43:15.682650    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:43:15.694781    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:43:15.694813    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:43:15.694819    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:43:15.732648    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:15.732739    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:15.733243    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:43:15.733250    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:43:15.744538    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:43:15.744551    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:43:15.760248    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:43:15.760264    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:43:15.771639    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:43:15.771653    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:43:15.795255    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:43:15.795262    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:43:15.799779    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:43:15.799787    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:43:15.833034    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:43:15.833042    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:43:15.847446    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:43:15.847455    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:43:15.863069    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:43:15.863080    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:43:15.887449    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:43:15.887459    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:43:15.899399    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:43:15.899413    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:43:15.914463    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:43:15.914475    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:43:15.930864    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:43:15.930874    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:43:15.943689    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:43:15.943701    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:43:15.955630    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:43:15.955642    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:43:15.969498    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:43:15.969507    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:43:15.980570    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:15.980585    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:43:15.980615    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:43:15.980620    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:15.980624    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:15.980627    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:15.980630    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:43:16.648955    9805 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/config.json ...
	I0408 04:43:16.649676    9805 machine.go:94] provisionDockerMachine start ...
	I0408 04:43:16.649793    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:16.650172    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:16.650187    9805 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 04:43:16.738907    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 04:43:16.738944    9805 buildroot.go:166] provisioning hostname "stopped-upgrade-462000"
	I0408 04:43:16.739050    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:16.739252    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:16.739261    9805 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-462000 && echo "stopped-upgrade-462000" | sudo tee /etc/hostname
	I0408 04:43:16.822121    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-462000
	
	I0408 04:43:16.822190    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:16.822335    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:16.822346    9805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-462000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-462000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-462000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 04:43:16.901017    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 04:43:16.901032    9805 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18588-7343/.minikube CaCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18588-7343/.minikube}
	I0408 04:43:16.901042    9805 buildroot.go:174] setting up certificates
	I0408 04:43:16.901048    9805 provision.go:84] configureAuth start
	I0408 04:43:16.901053    9805 provision.go:143] copyHostCerts
	I0408 04:43:16.901125    9805 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem, removing ...
	I0408 04:43:16.901134    9805 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem
	I0408 04:43:16.901292    9805 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem (1123 bytes)
	I0408 04:43:16.901537    9805 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem, removing ...
	I0408 04:43:16.901544    9805 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem
	I0408 04:43:16.902215    9805 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem (1679 bytes)
	I0408 04:43:16.902392    9805 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem, removing ...
	I0408 04:43:16.902398    9805 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem
	I0408 04:43:16.902468    9805 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem (1078 bytes)
	I0408 04:43:16.902580    9805 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-462000 san=[127.0.0.1 localhost minikube stopped-upgrade-462000]
	I0408 04:43:16.968000    9805 provision.go:177] copyRemoteCerts
	I0408 04:43:16.968030    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 04:43:16.968041    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:43:17.005543    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 04:43:17.012653    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 04:43:17.019468    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 04:43:17.025916    9805 provision.go:87] duration metric: took 124.859875ms to configureAuth
	I0408 04:43:17.025926    9805 buildroot.go:189] setting minikube options for container-runtime
	I0408 04:43:17.026025    9805 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:43:17.026066    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:17.026190    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:17.026197    9805 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 04:43:17.099186    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 04:43:17.099195    9805 buildroot.go:70] root file system type: tmpfs
	I0408 04:43:17.099247    9805 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 04:43:17.099294    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:17.099404    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:17.099437    9805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 04:43:17.171706    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 04:43:17.171764    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:17.171875    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:17.171883    9805 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 04:43:17.561062    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0408 04:43:17.561076    9805 machine.go:97] duration metric: took 911.401041ms to provisionDockerMachine
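The diff-or-replace one-liner above (sudo diff -u ... || { mv ...; systemctl ...; }) is an update-only-if-changed idiom: diff exits non-zero both when the files differ and when the installed unit is missing (as here, per the "can't stat" message), so the new unit is installed and docker restarted only in those cases. A sketch of the same logic in Go; installIfChanged is a hypothetical helper, not minikube's API:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged replaces the live unit and bounces docker only when the
    // freshly rendered unit differs from (or is missing at) the installed path.
    func installIfChanged(newPath, livePath string) error {
        fresh, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        if old, err := os.ReadFile(livePath); err == nil && bytes.Equal(old, fresh) {
            return nil // unchanged: leave the running service alone
        }
        if err := os.Rename(newPath, livePath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            cmd := exec.Command("sudo", append([]string{"systemctl", "-f"}, args...)...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(installIfChanged(
            "/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service"))
    }

Skipping the restart when nothing changed matters here because restarting dockerd tears down every running container on the node.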
	I0408 04:43:17.561083    9805 start.go:293] postStartSetup for "stopped-upgrade-462000" (driver="qemu2")
	I0408 04:43:17.561090    9805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 04:43:17.561146    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 04:43:17.561156    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:43:17.598415    9805 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 04:43:17.599687    9805 info.go:137] Remote host: Buildroot 2021.02.12
	I0408 04:43:17.599696    9805 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18588-7343/.minikube/addons for local assets ...
	I0408 04:43:17.599768    9805 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18588-7343/.minikube/files for local assets ...
	I0408 04:43:17.599853    9805 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem -> 77492.pem in /etc/ssl/certs
	I0408 04:43:17.599941    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 04:43:17.602289    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem --> /etc/ssl/certs/77492.pem (1708 bytes)
	I0408 04:43:17.609401    9805 start.go:296] duration metric: took 48.313334ms for postStartSetup
	I0408 04:43:17.609414    9805 fix.go:56] duration metric: took 21.203109584s for fixHost
	I0408 04:43:17.609445    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:17.609547    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:17.609551    9805 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 04:43:17.679764    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576598.176383754
	
	I0408 04:43:17.679776    9805 fix.go:216] guest clock: 1712576598.176383754
	I0408 04:43:17.679781    9805 fix.go:229] Guest: 2024-04-08 04:43:18.176383754 -0700 PDT Remote: 2024-04-08 04:43:17.609415 -0700 PDT m=+21.323518418 (delta=566.968754ms)
	I0408 04:43:17.679795    9805 fix.go:200] guest clock delta is within tolerance: 566.968754ms
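The tolerance check above compares the guest's date +%s.%N reading against the host clock at the instant the SSH command returned. Reproducing the arithmetic with the two timestamps from the log (the 2-second bound is an assumption for illustration; the log does not state the threshold):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest reported 1712576598.176383754 (04:43:18.176383754 PDT);
        // the host clock read 04:43:17.609415 PDT when the command returned.
        guest := time.Unix(1712576598, 176383754)
        host := time.Unix(1712576597, 609415000)
        delta := guest.Sub(host)
        fmt.Println(delta)                 // 566.968754ms, matching the log
        fmt.Println(delta < 2*time.Second) // assumed tolerance: true
    }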
	I0408 04:43:17.679802    9805 start.go:83] releasing machines lock for "stopped-upgrade-462000", held for 21.273506416s
	I0408 04:43:17.679880    9805 ssh_runner.go:195] Run: cat /version.json
	I0408 04:43:17.679890    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:43:17.679898    9805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 04:43:17.679925    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	W0408 04:43:17.680562    9805 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51526->127.0.0.1:51414: write: broken pipe
	I0408 04:43:17.680584    9805 retry.go:31] will retry after 303.712287ms: ssh: handshake failed: write tcp 127.0.0.1:51526->127.0.0.1:51414: write: broken pipe
	W0408 04:43:18.039702    9805 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0408 04:43:18.039906    9805 ssh_runner.go:195] Run: systemctl --version
	I0408 04:43:18.043748    9805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 04:43:18.047293    9805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 04:43:18.047354    9805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0408 04:43:18.053220    9805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0408 04:43:18.062030    9805 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
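The two find/sed one-liners above rewrite any bridge or podman CNI conflist so that its subnet and gateway match minikube's pod CIDR (10.244.0.0/16) and drop IPv6 entries. The same substitution expressed with Go regexps, on a made-up input:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Illustrative conflist fragment; 87-podman-bridge.conflist is what
        // the log actually reconfigured.
        conf := `{"type": "bridge", "ipam": {"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}}`
        subnet := regexp.MustCompile(`"subnet": "[^"]*"`)
        gateway := regexp.MustCompile(`"gateway": "[^"]*"`)
        conf = subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
        conf = gateway.ReplaceAllString(conf, `"gateway": "10.244.0.1"`)
        fmt.Println(conf) // bridge CNI now on minikube's pod CIDR
    }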
	I0408 04:43:18.062050    9805 start.go:494] detecting cgroup driver to use...
	I0408 04:43:18.062189    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 04:43:18.072825    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0408 04:43:18.077050    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 04:43:18.081264    9805 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 04:43:18.081297    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 04:43:18.085220    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 04:43:18.089002    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 04:43:18.092320    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 04:43:18.095277    9805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 04:43:18.098085    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 04:43:18.101083    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 04:43:18.103819    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 04:43:18.106503    9805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 04:43:18.109435    9805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 04:43:18.112433    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:18.193970    9805 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 04:43:18.200589    9805 start.go:494] detecting cgroup driver to use...
	I0408 04:43:18.200646    9805 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 04:43:18.207224    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 04:43:18.211708    9805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 04:43:18.217801    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 04:43:18.223136    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 04:43:18.227677    9805 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 04:43:18.269267    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 04:43:18.274150    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 04:43:18.279468    9805 ssh_runner.go:195] Run: which cri-dockerd
	I0408 04:43:18.280613    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 04:43:18.283076    9805 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
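The "scp memory --> ..." lines mean the payload is streamed from an in-memory buffer over the SSH connection rather than read from a local file. A sketch of that pattern with golang.org/x/crypto/ssh; minikube's ssh_runner speaks the scp protocol, so piping into sudo tee below is only the simplest stand-in, and the host, port, and auth details are placeholders:

    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{}, // key auth elided in this sketch
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "localhost:51414", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // The file content never touches the local disk: it is fed to the
        // remote command's stdin straight from a buffer.
        sess.Stdin = bytes.NewReader([]byte("[Service]\nExecStart=...\n"))
        if err := sess.Run("sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf >/dev/null"); err != nil {
            log.Fatal(err)
        }
    }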
	I0408 04:43:18.287902    9805 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 04:43:18.356984    9805 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 04:43:18.423769    9805 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 04:43:18.423836    9805 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 04:43:18.428922    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:18.505126    9805 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 04:43:19.646178    9805 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.141051584s)
	I0408 04:43:19.646243    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 04:43:19.651360    9805 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0408 04:43:19.657795    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 04:43:19.662621    9805 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 04:43:19.743836    9805 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 04:43:19.819257    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:19.896306    9805 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 04:43:19.901601    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 04:43:19.906536    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:19.989012    9805 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 04:43:20.027300    9805 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 04:43:20.027393    9805 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 04:43:20.029652    9805 start.go:562] Will wait 60s for crictl version
	I0408 04:43:20.029701    9805 ssh_runner.go:195] Run: which crictl
	I0408 04:43:20.031047    9805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 04:43:20.046486    9805 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0408 04:43:20.046555    9805 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 04:43:20.064351    9805 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 04:43:20.084419    9805 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0408 04:43:20.084531    9805 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0408 04:43:20.085847    9805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
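The grep at 04:43:20.084531 checks whether /etc/hosts already maps host.minikube.internal; since it does not, the brace-group one-liner above rewrites the file: strip any stale line for the name, append the fresh mapping, then sudo cp the temp file into place. The same upsert in Go (the helper name is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost mirrors the shell one-liner: drop any existing line ending in
    // a tab plus the name, then append the desired mapping.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "10.0.2.2", "host.minikube.internal"))
    }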
	I0408 04:43:20.089863    9805 kubeadm.go:877] updating cluster {Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51448 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0408 04:43:20.089908    9805 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 04:43:20.089947    9805 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 04:43:20.100663    9805 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 04:43:20.100673    9805 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 04:43:20.100726    9805 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 04:43:20.104014    9805 ssh_runner.go:195] Run: which lz4
	I0408 04:43:20.105310    9805 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 04:43:20.106482    9805 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 04:43:20.106492    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0408 04:43:20.784999    9805 docker.go:649] duration metric: took 679.727084ms to copy over tarball
	I0408 04:43:20.785071    9805 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 04:43:21.956355    9805 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.171285208s)
	I0408 04:43:21.956368    9805 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 04:43:21.971920    9805 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 04:43:21.974808    9805 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0408 04:43:21.979819    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:22.061506    9805 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 04:43:23.707525    9805 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.646024292s)
	I0408 04:43:23.707614    9805 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 04:43:23.723814    9805 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 04:43:23.723826    9805 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 04:43:23.723832    9805 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 04:43:23.729864    9805 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:23.729889    9805 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0408 04:43:23.729976    9805 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:23.730334    9805 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:23.730482    9805 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:23.730516    9805 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:23.730632    9805 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:23.731033    9805 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:23.740829    9805 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 04:43:23.740881    9805 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:23.740901    9805 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:23.740974    9805 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:23.741023    9805 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:23.741043    9805 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:23.741554    9805 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:23.741552    9805 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:24.160107    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0408 04:43:24.171153    9805 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0408 04:43:24.171176    9805 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0408 04:43:24.171231    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0408 04:43:24.181285    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0408 04:43:24.181393    9805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0408 04:43:24.183101    9805 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0408 04:43:24.183113    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0408 04:43:24.191743    9805 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0408 04:43:24.191752    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0408 04:43:24.191790    9805 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0408 04:43:24.191900    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:24.209354    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:24.216905    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:24.228045    9805 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
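Each image in this section goes through the same cycle just completed for pause:3.7: docker image inspect to compare the runtime's image ID against the expected hash, docker rmi to drop the wrong-architecture tag, scp of the cached tarball into the guest, then sudo cat ... | docker load. A condensed Go sketch of that decision loop; exec-ing the docker CLI locally stands in for minikube's ssh_runner, and the helper name and truncated hash are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ensureImage loads the cached tarball only when the runtime does not
    // already hold the image at the expected ID.
    func ensureImage(image, wantID, cachedTar string) error {
        out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if strings.TrimSpace(string(out)) == wantID {
            return nil // already present at the right hash
        }
        exec.Command("docker", "rmi", image).Run() // drop a stale/wrong-arch tag, if any
        load := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", cachedTar))
        return load.Run()
    }

    func main() {
        // Hash truncated for the sketch; the log shows the full value.
        _ = ensureImage("registry.k8s.io/pause:3.7",
            "sha256:e5a475a03805", "/var/lib/minikube/images/pause_3.7")
    }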
	I0408 04:43:24.228075    9805 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0408 04:43:24.228086    9805 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0408 04:43:24.228093    9805 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:24.228097    9805 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:24.228152    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:24.228152    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:24.236827    9805 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0408 04:43:24.236846    9805 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:24.236901    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:24.246627    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:24.265773    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0408 04:43:24.265908    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 04:43:24.266005    9805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0408 04:43:24.267018    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0408 04:43:24.267062    9805 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0408 04:43:24.267075    9805 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:24.267111    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:24.268071    9805 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0408 04:43:24.268082    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0408 04:43:24.280515    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:24.286280    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0408 04:43:24.295699    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:24.309670    9805 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0408 04:43:24.309683    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0408 04:43:24.312145    9805 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0408 04:43:24.312164    9805 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:24.312223    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:24.325262    9805 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0408 04:43:24.325285    9805 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:24.325343    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:24.368821    9805 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0408 04:43:24.368845    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0408 04:43:24.368866    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0408 04:43:24.368946    9805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0408 04:43:24.370393    9805 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0408 04:43:24.370404    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0408 04:43:24.529041    9805 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0408 04:43:24.529055    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0408 04:43:24.592706    9805 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0408 04:43:24.592821    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:24.672355    9805 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0408 04:43:24.672379    9805 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0408 04:43:24.672400    9805 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:24.672469    9805 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:24.686489    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 04:43:24.686590    9805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0408 04:43:24.688067    9805 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0408 04:43:24.688078    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0408 04:43:24.712292    9805 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 04:43:24.712308    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0408 04:43:24.944648    9805 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 04:43:24.944687    9805 cache_images.go:92] duration metric: took 1.220864791s to LoadCachedImages
	W0408 04:43:24.944730    9805 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0408 04:43:24.944735    9805 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0408 04:43:24.944803    9805 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-462000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 04:43:24.944867    9805 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0408 04:43:24.958115    9805 cni.go:84] Creating CNI manager for ""
	I0408 04:43:24.958127    9805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:43:24.958132    9805 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 04:43:24.958140    9805 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-462000 NodeName:stopped-upgrade-462000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 04:43:24.958204    9805 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-462000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 04:43:24.958256    9805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0408 04:43:24.961601    9805 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 04:43:24.961631    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 04:43:24.964754    9805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0408 04:43:24.969946    9805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 04:43:24.974956    9805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0408 04:43:24.980200    9805 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0408 04:43:24.981443    9805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 04:43:24.985096    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:25.048630    9805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 04:43:25.054677    9805 certs.go:68] Setting up /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000 for IP: 10.0.2.15
	I0408 04:43:25.054685    9805 certs.go:194] generating shared ca certs ...
	I0408 04:43:25.054694    9805 certs.go:226] acquiring lock for ca certs: {Name:mkf571f644c202bb973f8b5774e57a066bda7c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:43:25.054849    9805 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.key
	I0408 04:43:25.054896    9805 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.key
	I0408 04:43:25.054901    9805 certs.go:256] generating profile certs ...
	I0408 04:43:25.054973    9805 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.key
	I0408 04:43:25.054992    9805 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3
	I0408 04:43:25.055002    9805 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0408 04:43:25.195336    9805 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 ...
	I0408 04:43:25.195352    9805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3: {Name:mkeaa4f5964f1e35c4e71960ef905304f13cde2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:43:25.195669    9805 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3 ...
	I0408 04:43:25.195674    9805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3: {Name:mk86a190f057fbd339413ab3ccc5a7ca36f4036e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:43:25.195825    9805 certs.go:381] copying /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt
	I0408 04:43:25.195960    9805 certs.go:385] copying /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key
	I0408 04:43:25.196110    9805 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/proxy-client.key
	I0408 04:43:25.196249    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749.pem (1338 bytes)
	W0408 04:43:25.196280    9805 certs.go:480] ignoring /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749_empty.pem, impossibly tiny 0 bytes
	I0408 04:43:25.196285    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 04:43:25.196310    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem (1078 bytes)
	I0408 04:43:25.196336    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem (1123 bytes)
	I0408 04:43:25.196362    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem (1679 bytes)
	I0408 04:43:25.196412    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem (1708 bytes)
	I0408 04:43:25.196752    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 04:43:25.204060    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0408 04:43:25.210938    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 04:43:25.217756    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 04:43:25.225141    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 04:43:25.232512    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 04:43:25.239113    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 04:43:25.245844    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 04:43:25.253028    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem --> /usr/share/ca-certificates/77492.pem (1708 bytes)
	I0408 04:43:25.259891    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 04:43:25.266395    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749.pem --> /usr/share/ca-certificates/7749.pem (1338 bytes)
	I0408 04:43:25.273459    9805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 04:43:25.278624    9805 ssh_runner.go:195] Run: openssl version
	I0408 04:43:25.280449    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 04:43:25.283361    9805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:43:25.284850    9805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:43:25.284882    9805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:43:25.286723    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 04:43:25.289975    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7749.pem && ln -fs /usr/share/ca-certificates/7749.pem /etc/ssl/certs/7749.pem"
	I0408 04:43:25.293440    9805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7749.pem
	I0408 04:43:25.294986    9805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:27 /usr/share/ca-certificates/7749.pem
	I0408 04:43:25.295005    9805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7749.pem
	I0408 04:43:25.296798    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7749.pem /etc/ssl/certs/51391683.0"
	I0408 04:43:25.299575    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77492.pem && ln -fs /usr/share/ca-certificates/77492.pem /etc/ssl/certs/77492.pem"
	I0408 04:43:25.302424    9805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77492.pem
	I0408 04:43:25.303833    9805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:27 /usr/share/ca-certificates/77492.pem
	I0408 04:43:25.303855    9805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77492.pem
	I0408 04:43:25.305517    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77492.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 04:43:25.308822    9805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 04:43:25.310409    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 04:43:25.312749    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 04:43:25.314650    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 04:43:25.316646    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 04:43:25.318496    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 04:43:25.320341    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
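The run of openssl calls above does two things: x509 -hash computes the subject hash that OpenSSL's hashed certificate directory expects, so /etc/ssl/certs/<hash>.0 can be symlinked at the PEM, and -checkend 86400 exits non-zero if the cert expires within 24 hours. A Go sketch that shells out the same way (path copied from the log; creating the symlink requires root in practice):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // Subject hash, e.g. "b5213941" for minikubeCA per the log.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        os.Remove(link) // replace any stale link
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
        // -checkend returns exit status 1 when the cert expires inside the window.
        err = exec.Command("openssl", "x509", "-noout", "-in", pem, "-checkend", "86400").Run()
        fmt.Println("expires within 24h:", err != nil)
    }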
	I0408 04:43:25.322181    9805 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51448 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:43:25.322258    9805 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 04:43:25.332976    9805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 04:43:25.336086    9805 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 04:43:25.336093    9805 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 04:43:25.336095    9805 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 04:43:25.336123    9805 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 04:43:25.338813    9805 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 04:43:25.339106    9805 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-462000" does not appear in /Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:43:25.339212    9805 kubeconfig.go:62] /Users/jenkins/minikube-integration/18588-7343/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-462000" cluster setting kubeconfig missing "stopped-upgrade-462000" context setting]
	I0408 04:43:25.339402    9805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/kubeconfig: {Name:mk04d6060f19666b377da34a3aa7f8b9bcbb5054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:43:25.339855    9805 kapi.go:59] client config for stopped-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1039f7940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 04:43:25.340179    9805 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 04:43:25.342879    9805 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-462000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
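The drift check above relies on diff's exit status: 0 means the rendered kubeadm.yaml is unchanged, 1 means the files differ and the cluster will be reconfigured. A minimal sketch of that decision, run against local paths rather than through the ssh runner (an assumption made for brevity):

    package main

    import (
            "fmt"
            "os/exec"
    )

    // configDrifted reports whether two kubeadm configs differ, using diff's
    // exit status exactly as the log above does: 0 = identical, 1 = drift.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
            out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
            if err == nil {
                    return false, "", nil // exit 0: no drift
            }
            if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
                    return true, string(out), nil // exit 1: files differ
            }
            return false, "", err // exit 2 or exec failure: a real error
    }

    func main() {
            drifted, diff, err := configDrifted(
                    "/var/tmp/minikube/kubeadm.yaml",
                    "/var/tmp/minikube/kubeadm.yaml.new")
            if err != nil {
                    fmt.Println("diff failed:", err)
                    return
            }
            if drifted {
                    fmt.Println("detected kubeadm config drift:\n" + diff)
            }
    }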
	I0408 04:43:25.342896    9805 kubeadm.go:1154] stopping kube-system containers ...
	I0408 04:43:25.342938    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 04:43:25.353623    9805 docker.go:483] Stopping containers: [00e1dd75f73b 39adc787a95e 3c09b8b966ff f69b3e2174f4 4275b5aac9cf 7acaa22acfc7 479bf6d02b41 c7faa8c96454]
	I0408 04:43:25.353685    9805 ssh_runner.go:195] Run: docker stop 00e1dd75f73b 39adc787a95e 3c09b8b966ff f69b3e2174f4 4275b5aac9cf 7acaa22acfc7 479bf6d02b41 c7faa8c96454
	I0408 04:43:25.364461    9805 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 04:43:25.369701    9805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 04:43:25.372777    9805 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 04:43:25.372783    9805 kubeadm.go:156] found existing configuration files:
	
	I0408 04:43:25.372803    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/admin.conf
	I0408 04:43:25.375309    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 04:43:25.375333    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 04:43:25.377920    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/kubelet.conf
	I0408 04:43:25.380808    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 04:43:25.380828    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 04:43:25.383450    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/controller-manager.conf
	I0408 04:43:25.385913    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 04:43:25.385935    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 04:43:25.389020    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/scheduler.conf
	I0408 04:43:25.391497    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 04:43:25.391513    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
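The four grep/rm pairs above all apply one rule: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, delete it so kubeadm can regenerate it. A condensed sketch of that loop, again operating on the local filesystem rather than over ssh:

    package main

    import (
            "fmt"
            "os"
            "strings"
    )

    // cleanStaleConfigs removes any kubeconfig that does not reference the
    // expected endpoint, mirroring the grep/rm sequence in the log above.
    func cleanStaleConfigs(endpoint string, paths []string) {
            for _, p := range paths {
                    data, err := os.ReadFile(p)
                    // A missing file and a file without the endpoint are treated
                    // alike; kubeadm rewrites the file during the next init.
                    if err != nil || !strings.Contains(string(data), endpoint) {
                            fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
                            os.Remove(p) // ignore the error, like rm -f
                    }
            }
    }

    func main() {
            cleanStaleConfigs("https://control-plane.minikube.internal:51448",
                    []string{
                            "/etc/kubernetes/admin.conf",
                            "/etc/kubernetes/kubelet.conf",
                            "/etc/kubernetes/controller-manager.conf",
                            "/etc/kubernetes/scheduler.conf",
                    })
    }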
	I0408 04:43:25.394038    9805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 04:43:25.397056    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:25.419064    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:26.093953    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:26.227548    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:26.246954    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
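Rather than a full init, the restart path replays individual kubeadm init phases in a fixed order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence driven from Go, assuming kubeadm and the config file are already in place on the machine where this runs:

    package main

    import (
            "fmt"
            "os/exec"
    )

    func main() {
            // Phase order matches the log above; each phase is rerun against
            // the existing /var/tmp/minikube/kubeadm.yaml.
            phases := []string{
                    "certs all",
                    "kubeconfig all",
                    "kubelet-start",
                    "control-plane all",
                    "etcd local",
            }
            for _, phase := range phases {
                    cmd := fmt.Sprintf(
                            "sudo kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
                            phase)
                    if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                            fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                            return
                    }
            }
    }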
	I0408 04:43:26.267443    9805 api_server.go:52] waiting for apiserver process to appear ...
	I0408 04:43:26.267520    9805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:43:25.984575    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:26.769586    9805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:43:27.268812    9805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:43:27.275222    9805 api_server.go:72] duration metric: took 1.007793459s to wait for apiserver process to appear ...
	I0408 04:43:27.275234    9805 api_server.go:88] waiting for apiserver healthz status ...
	I0408 04:43:27.275243    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
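From here on, both processes (9805 and 9654) poll https://10.0.2.15:8443/healthz on a fixed cadence and log a client timeout as "stopped". A minimal sketch of such a poll; skipping TLS verification is an assumption made for brevity, whereas the real client trusts the profile's ca.crt per the rest.Config shown above:

    package main

    import (
            "crypto/tls"
            "fmt"
            "net/http"
            "time"
    )

    func main() {
            client := &http.Client{
                    Timeout: 4 * time.Second, // roughly the gaps between checks above
                    Transport: &http.Transport{
                            // Assumption for the sketch; the real client uses ca.crt.
                            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
                    },
            }
            deadline := time.Now().Add(2 * time.Minute)
            for time.Now().Before(deadline) {
                    resp, err := client.Get("https://10.0.2.15:8443/healthz")
                    if err == nil {
                            resp.Body.Close()
                            if resp.StatusCode == http.StatusOK {
                                    fmt.Println("apiserver healthz ok")
                                    return
                            }
                    } else {
                            fmt.Println("stopped:", err) // matches the log's "stopped:" lines
                    }
                    time.Sleep(time.Second)
            }
            fmt.Println("healthz never became ready")
    }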
	I0408 04:43:30.986703    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:30.986822    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:43:30.997418    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:43:30.997495    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:43:31.008311    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:43:31.008388    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:43:31.019111    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:43:31.019182    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:43:31.029471    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:43:31.029545    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:43:31.040012    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:43:31.040084    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:43:31.051562    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:43:31.051632    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:43:31.062201    9654 logs.go:276] 0 containers: []
	W0408 04:43:31.062214    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:43:31.062280    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:43:31.072911    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:43:31.072948    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:43:31.072955    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:43:31.110847    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:43:31.110859    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:43:31.125342    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:43:31.125356    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:43:31.142643    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:43:31.142658    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:43:31.155061    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:43:31.155072    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:43:31.178927    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:43:31.178934    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:43:31.218308    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:31.218401    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:31.218934    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:43:31.218938    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:43:31.232595    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:43:31.232607    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:43:31.244301    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:43:31.244312    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:43:31.255638    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:43:31.255651    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:43:31.267330    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:43:31.267342    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:43:31.272164    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:43:31.272178    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:43:31.284744    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:43:31.284757    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:43:31.299618    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:43:31.299628    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:43:31.311220    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:43:31.311233    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:43:31.323132    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:43:31.323144    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:43:31.336350    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:43:31.336365    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:43:31.350227    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:31.350242    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:43:31.350281    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:43:31.350286    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:31.350289    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:31.350293    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:31.350296    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
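Each diagnostic round in this log has the same shape: list containers per component with docker ps -a --filter=name=k8s_<component>, then tail the last 400 lines of each. A condensed sketch of one such round, assuming a local docker CLI:

    package main

    import (
            "fmt"
            "os/exec"
            "strings"
    )

    // containersFor lists container IDs whose names match a k8s_ component
    // prefix, like the docker ps invocations in the log above.
    func containersFor(component string) []string {
            out, err := exec.Command("docker", "ps", "-a",
                    "--filter", "name=k8s_"+component,
                    "--format", "{{.ID}}").Output()
            if err != nil {
                    return nil
            }
            return strings.Fields(string(out))
    }

    func main() {
            for _, c := range []string{"kube-apiserver", "etcd", "coredns",
                    "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
                    for _, id := range containersFor(c) {
                            fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
                            out, _ := exec.Command("docker", "logs", "--tail", "400", id).
                                    CombinedOutput()
                            fmt.Print(string(out))
                    }
            }
    }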
	I0408 04:43:32.277268    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:32.277332    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:37.277495    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:37.277544    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:41.354262    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:42.278144    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:42.278183    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:46.356416    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:46.356589    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:43:46.367190    9654 logs.go:276] 2 containers: [3f2f145a8f16 fc91683e307d]
	I0408 04:43:46.367266    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:43:46.378006    9654 logs.go:276] 2 containers: [bc0b9c25a9da 8854a7b5f3ef]
	I0408 04:43:46.378091    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:43:46.388495    9654 logs.go:276] 1 containers: [36847e65ba03]
	I0408 04:43:46.388569    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:43:46.399065    9654 logs.go:276] 2 containers: [f52f289e8112 bca1fee77bc3]
	I0408 04:43:46.399157    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:43:46.410379    9654 logs.go:276] 1 containers: [9d86accc4f3c]
	I0408 04:43:46.410452    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:43:46.420974    9654 logs.go:276] 2 containers: [9220cf4363b0 90e3f9b0faaa]
	I0408 04:43:46.421061    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:43:46.431293    9654 logs.go:276] 0 containers: []
	W0408 04:43:46.431304    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:43:46.431376    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:43:46.442287    9654 logs.go:276] 2 containers: [753f3c118640 f9f1de9506cf]
	I0408 04:43:46.442305    9654 logs.go:123] Gathering logs for kube-apiserver [3f2f145a8f16] ...
	I0408 04:43:46.442321    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f145a8f16"
	I0408 04:43:46.459903    9654 logs.go:123] Gathering logs for etcd [bc0b9c25a9da] ...
	I0408 04:43:46.459914    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0b9c25a9da"
	I0408 04:43:46.473745    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:43:46.473758    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:43:46.478585    9654 logs.go:123] Gathering logs for kube-apiserver [fc91683e307d] ...
	I0408 04:43:46.478591    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc91683e307d"
	I0408 04:43:46.490277    9654 logs.go:123] Gathering logs for kube-controller-manager [9220cf4363b0] ...
	I0408 04:43:46.490291    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9220cf4363b0"
	I0408 04:43:46.507685    9654 logs.go:123] Gathering logs for kube-controller-manager [90e3f9b0faaa] ...
	I0408 04:43:46.507695    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90e3f9b0faaa"
	I0408 04:43:46.522019    9654 logs.go:123] Gathering logs for storage-provisioner [f9f1de9506cf] ...
	I0408 04:43:46.522030    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f1de9506cf"
	I0408 04:43:46.533347    9654 logs.go:123] Gathering logs for etcd [8854a7b5f3ef] ...
	I0408 04:43:46.533358    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8854a7b5f3ef"
	I0408 04:43:46.552569    9654 logs.go:123] Gathering logs for kube-scheduler [f52f289e8112] ...
	I0408 04:43:46.552579    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f52f289e8112"
	I0408 04:43:46.564554    9654 logs.go:123] Gathering logs for kube-scheduler [bca1fee77bc3] ...
	I0408 04:43:46.564564    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bca1fee77bc3"
	I0408 04:43:46.576980    9654 logs.go:123] Gathering logs for kube-proxy [9d86accc4f3c] ...
	I0408 04:43:46.576993    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d86accc4f3c"
	I0408 04:43:46.589355    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:43:46.589366    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:43:46.614168    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:43:46.614183    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:43:46.650692    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:46.650785    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:46.651319    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:43:46.651330    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:43:46.692342    9654 logs.go:123] Gathering logs for coredns [36847e65ba03] ...
	I0408 04:43:46.692359    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36847e65ba03"
	I0408 04:43:46.709619    9654 logs.go:123] Gathering logs for storage-provisioner [753f3c118640] ...
	I0408 04:43:46.709632    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753f3c118640"
	I0408 04:43:46.723086    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:43:46.723096    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:43:46.734680    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:46.734690    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:43:46.734715    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:43:46.734721    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:43:46.734726    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:43:46.734730    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:43:46.734735    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:43:47.278653    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:47.278693    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:52.279436    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:52.279573    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:56.735510    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:57.280813    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:57.280854    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:01.736297    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:01.736371    9654 kubeadm.go:591] duration metric: took 4m7.312357125s to restartPrimaryControlPlane
	W0408 04:44:01.736438    9654 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 04:44:01.736462    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0408 04:44:02.730101    9654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 04:44:02.735356    9654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 04:44:02.738238    9654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 04:44:02.741193    9654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 04:44:02.741200    9654 kubeadm.go:156] found existing configuration files:
	
	I0408 04:44:02.741226    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/admin.conf
	I0408 04:44:02.743857    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 04:44:02.743882    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 04:44:02.746618    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/kubelet.conf
	I0408 04:44:02.749755    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 04:44:02.749774    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 04:44:02.752850    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/controller-manager.conf
	I0408 04:44:02.755294    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 04:44:02.755315    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 04:44:02.758272    9654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/scheduler.conf
	I0408 04:44:02.761078    9654 kubeadm.go:162] "https://control-plane.minikube.internal:51241" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51241 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 04:44:02.761096    9654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 04:44:02.763730    9654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 04:44:02.780855    9654 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0408 04:44:02.780897    9654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 04:44:02.831868    9654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 04:44:02.831938    9654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 04:44:02.831999    9654 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0408 04:44:02.880292    9654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 04:44:02.884500    9654 out.go:204]   - Generating certificates and keys ...
	I0408 04:44:02.884538    9654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 04:44:02.884567    9654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 04:44:02.884612    9654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 04:44:02.884641    9654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 04:44:02.884680    9654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 04:44:02.884705    9654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 04:44:02.884738    9654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 04:44:02.884770    9654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 04:44:02.884832    9654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 04:44:02.884982    9654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 04:44:02.885009    9654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 04:44:02.885037    9654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 04:44:02.976821    9654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 04:44:03.098756    9654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 04:44:03.199435    9654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 04:44:03.293163    9654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 04:44:03.321421    9654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 04:44:03.321743    9654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 04:44:03.321766    9654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 04:44:03.407932    9654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 04:44:03.412166    9654 out.go:204]   - Booting up control plane ...
	I0408 04:44:03.412211    9654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 04:44:03.412251    9654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 04:44:03.412287    9654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 04:44:03.412407    9654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 04:44:03.413513    9654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 04:44:02.282106    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:02.282135    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:07.919498    9654 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505789 seconds
	I0408 04:44:07.919577    9654 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 04:44:07.924552    9654 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 04:44:08.444417    9654 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 04:44:08.444822    9654 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-835000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 04:44:08.948794    9654 kubeadm.go:309] [bootstrap-token] Using token: mc4h03.s6znjht445679d25
	I0408 04:44:08.953343    9654 out.go:204]   - Configuring RBAC rules ...
	I0408 04:44:08.953407    9654 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 04:44:08.953456    9654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 04:44:08.961488    9654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 04:44:08.962591    9654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 04:44:08.963646    9654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 04:44:08.964752    9654 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 04:44:08.970423    9654 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 04:44:09.142274    9654 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 04:44:09.353653    9654 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 04:44:09.354019    9654 kubeadm.go:309] 
	I0408 04:44:09.354048    9654 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 04:44:09.354053    9654 kubeadm.go:309] 
	I0408 04:44:09.354094    9654 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 04:44:09.354098    9654 kubeadm.go:309] 
	I0408 04:44:09.354113    9654 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 04:44:09.354146    9654 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 04:44:09.354171    9654 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 04:44:09.354175    9654 kubeadm.go:309] 
	I0408 04:44:09.354202    9654 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 04:44:09.354207    9654 kubeadm.go:309] 
	I0408 04:44:09.354230    9654 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 04:44:09.354235    9654 kubeadm.go:309] 
	I0408 04:44:09.354266    9654 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 04:44:09.354300    9654 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 04:44:09.354332    9654 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 04:44:09.354334    9654 kubeadm.go:309] 
	I0408 04:44:09.354372    9654 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 04:44:09.354406    9654 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 04:44:09.354408    9654 kubeadm.go:309] 
	I0408 04:44:09.354446    9654 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mc4h03.s6znjht445679d25 \
	I0408 04:44:09.354491    9654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63c1082056c9546e83bc7e238ddca3361d3bc0d4a9173109edd9ba5d9e410231 \
	I0408 04:44:09.354500    9654 kubeadm.go:309] 	--control-plane 
	I0408 04:44:09.354502    9654 kubeadm.go:309] 
	I0408 04:44:09.354540    9654 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 04:44:09.354543    9654 kubeadm.go:309] 
	I0408 04:44:09.354580    9654 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mc4h03.s6znjht445679d25 \
	I0408 04:44:09.354646    9654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63c1082056c9546e83bc7e238ddca3361d3bc0d4a9173109edd9ba5d9e410231 
	I0408 04:44:09.354699    9654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 04:44:09.354704    9654 cni.go:84] Creating CNI manager for ""
	I0408 04:44:09.354712    9654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:44:09.358995    9654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 04:44:09.366926    9654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 04:44:09.370248    9654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
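The bridge CNI step above copies a 496-byte conflist into /etc/cni/net.d. The exact bytes are not shown in this log; the sketch below writes an illustrative minimal bridge + host-local conflist of roughly that shape (the contents are an assumption, not minikube's actual template):

    package main

    import (
            "fmt"
            "os"
    )

    func main() {
            // Illustrative bridge CNI config; minikube's real 1-k8s.conflist
            // may differ in names and options. Writing to /etc requires root.
            conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    `
            if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
                    []byte(conflist), 0o644); err != nil {
                    fmt.Println("write failed:", err)
            }
    }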
	I0408 04:44:09.375071    9654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 04:44:09.375148    9654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-835000 minikube.k8s.io/updated_at=2024_04_08T04_44_09_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=running-upgrade-835000 minikube.k8s.io/primary=true
	I0408 04:44:09.375149    9654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 04:44:09.409671    9654 kubeadm.go:1107] duration metric: took 34.552416ms to wait for elevateKubeSystemPrivileges
	I0408 04:44:09.409725    9654 ops.go:34] apiserver oom_adj: -16
	W0408 04:44:09.418535    9654 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 04:44:09.418547    9654 kubeadm.go:393] duration metric: took 4m15.022107042s to StartCluster
	I0408 04:44:09.418559    9654 settings.go:142] acquiring lock: {Name:mkd5c8378547f472aec7259eff81e77b1454222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:44:09.418700    9654 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:44:09.419051    9654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/kubeconfig: {Name:mk04d6060f19666b377da34a3aa7f8b9bcbb5054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:44:09.419285    9654 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:44:09.425827    9654 out.go:177] * Verifying Kubernetes components...
	I0408 04:44:09.419315    9654 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 04:44:09.419578    9654 config.go:182] Loaded profile config "running-upgrade-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:44:09.437961    9654 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-835000"
	I0408 04:44:09.437972    9654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:44:09.437974    9654 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-835000"
	W0408 04:44:09.437977    9654 addons.go:243] addon storage-provisioner should already be in state true
	I0408 04:44:09.437986    9654 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-835000"
	I0408 04:44:09.437990    9654 host.go:66] Checking if "running-upgrade-835000" exists ...
	I0408 04:44:09.437997    9654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-835000"
	I0408 04:44:09.439115    9654 kapi.go:59] client config for running-upgrade-835000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/running-upgrade-835000/client.key", CAFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10237f940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 04:44:09.440369    9654 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-835000"
	W0408 04:44:09.440374    9654 addons.go:243] addon default-storageclass should already be in state true
	I0408 04:44:09.440384    9654 host.go:66] Checking if "running-upgrade-835000" exists ...
	I0408 04:44:09.444837    9654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:44:07.283693    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:07.283769    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:09.452876    9654 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 04:44:09.452884    9654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 04:44:09.452891    9654 sshutil.go:53] new ssh client: &{IP:localhost Port:51209 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/running-upgrade-835000/id_rsa Username:docker}
	I0408 04:44:09.453543    9654 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 04:44:09.453548    9654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 04:44:09.453552    9654 sshutil.go:53] new ssh client: &{IP:localhost Port:51209 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/running-upgrade-835000/id_rsa Username:docker}
	I0408 04:44:09.533239    9654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 04:44:09.538595    9654 api_server.go:52] waiting for apiserver process to appear ...
	I0408 04:44:09.538639    9654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:44:09.543335    9654 api_server.go:72] duration metric: took 124.035292ms to wait for apiserver process to appear ...
	I0408 04:44:09.543364    9654 api_server.go:88] waiting for apiserver healthz status ...
	I0408 04:44:09.543372    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:09.561629    9654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 04:44:09.566009    9654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
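Addon installation follows the two-step pattern visible above: stage the manifest on the node (scp memory --> file), then apply it with the cluster's pinned kubectl binary. A condensed sketch using the paths from the log; running locally instead of over ssh is an assumption:

    package main

    import (
            "fmt"
            "os"
            "os/exec"
    )

    // applyAddon writes a manifest and applies it with kubectl, mirroring the
    // scp + kubectl apply pair in the log above.
    func applyAddon(path string, manifest []byte) error {
            if err := os.WriteFile(path, manifest, 0o644); err != nil {
                    return err
            }
            out, err := exec.Command("sudo",
                    "KUBECONFIG=/var/lib/minikube/kubeconfig",
                    "/var/lib/minikube/binaries/v1.24.1/kubectl",
                    "apply", "-f", path).CombinedOutput()
            if err != nil {
                    return fmt.Errorf("apply %s: %v\n%s", path, err, out)
            }
            return nil
    }

    func main() {
            // Manifest contents elided; the real files are storageclass.yaml
            // (271 bytes) and storage-provisioner.yaml (2676 bytes).
            if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml",
                    []byte("# manifest bytes go here\n")); err != nil {
                    fmt.Println(err)
            }
    }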
	I0408 04:44:12.284148    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:12.284184    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:14.545382    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:14.545411    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:17.286325    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:17.286393    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:19.545586    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:19.545655    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:22.288590    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:22.288611    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:24.545862    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:24.545935    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:27.290748    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:27.290935    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:27.302828    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:27.302908    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:27.313644    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:27.313714    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:27.323982    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:27.324049    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:27.339210    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:27.339287    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:27.349447    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:27.349585    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:27.360127    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:27.360209    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:27.370271    9805 logs.go:276] 0 containers: []
	W0408 04:44:27.370284    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:27.370355    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:27.380896    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:27.380913    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:27.380918    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:27.403591    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:27.403602    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:27.417499    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:27.417511    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:27.429184    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:27.429196    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:27.441020    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:27.441035    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:27.458548    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:27.458559    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:27.484941    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:27.484954    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:27.498058    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:27.498069    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:27.536554    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:27.536565    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:27.652392    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:27.652405    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:27.667009    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:27.667022    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:27.678760    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:27.678777    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:27.694005    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:27.694022    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:27.709664    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:27.709675    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:27.721101    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:27.721112    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:27.725740    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:27.725748    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:27.753052    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:27.753064    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:30.271744    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:29.546184    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:29.546207    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:35.273870    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:35.274040    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:35.294944    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:35.295038    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:35.306130    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:35.306202    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:35.316512    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:35.316582    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:35.326970    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:35.327035    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:35.340425    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:35.340505    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:35.351246    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:35.351332    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:35.362645    9805 logs.go:276] 0 containers: []
	W0408 04:44:35.362656    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:35.362721    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:35.373966    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:35.374002    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:35.374009    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:35.385327    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:35.385339    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:35.405118    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:35.405129    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:35.419210    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:35.419221    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:35.457937    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:35.457946    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:35.472695    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:35.472706    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:35.493659    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:35.493672    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:35.521575    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:35.521583    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:35.547149    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:35.547161    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:35.560967    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:35.560981    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:35.575821    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:35.575833    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:35.590948    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:35.590959    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:35.604997    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:35.605009    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:35.618972    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:35.618988    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:35.630799    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:35.630817    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:35.642499    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:35.642510    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:35.646940    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:35.646949    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:34.546603    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:34.546629    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:39.547192    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:39.547214    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0408 04:44:39.914608    9654 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0408 04:44:39.919335    9654 out.go:177] * Enabled addons: storage-provisioner
	I0408 04:44:38.186620    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:39.931262    9654 addons.go:505] duration metric: took 30.512388583s for enable addons: enabled=[storage-provisioner]
	I0408 04:44:43.188862    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:43.189074    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:43.205432    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:43.205538    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:43.218483    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:43.218580    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:43.229910    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:43.229985    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:43.240787    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:43.240864    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:43.251331    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:43.251402    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:43.262078    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:43.262150    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:43.272360    9805 logs.go:276] 0 containers: []
	W0408 04:44:43.272370    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:43.272440    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:43.282833    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:43.282863    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:43.282869    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:43.286934    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:43.286943    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:43.300493    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:43.300505    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:43.314735    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:43.314765    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:43.326807    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:43.326817    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:43.343894    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:43.343904    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:43.354831    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:43.354840    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:43.366072    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:43.366081    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:43.377544    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:43.377555    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:43.392533    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:43.392544    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:43.420521    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:43.420534    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:43.433039    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:43.433053    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:43.471857    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:43.471869    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:43.508905    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:43.508919    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:43.533684    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:43.533699    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:43.548031    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:43.548042    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:43.558978    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:43.558988    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:46.080599    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:44.547927    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:44.547965    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:51.082723    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:51.082885    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:51.094564    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:51.094644    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:51.105761    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:51.105830    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:51.119160    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:51.119231    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:51.129401    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:51.129487    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:51.139857    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:51.139930    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:51.159090    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:51.159160    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:51.172470    9805 logs.go:276] 0 containers: []
	W0408 04:44:51.172484    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:51.172547    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:51.182878    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:51.182897    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:51.182902    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:51.219548    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:51.219560    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:51.223580    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:51.223595    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:51.247999    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:51.248010    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:51.265928    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:51.265941    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:51.277372    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:51.277388    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:51.315827    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:51.315842    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:49.548681    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:49.548727    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:51.329723    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:51.329733    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:51.343390    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:51.343403    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:51.357399    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:51.357410    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:51.369275    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:51.369286    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:51.383744    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:51.383759    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:51.400925    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:51.400937    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:51.425026    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:51.425034    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:51.440554    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:51.440564    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:51.451397    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:51.451407    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:51.462264    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:51.462274    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:53.976173    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:54.549963    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:54.549999    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:58.978715    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:58.979035    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:59.004879    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:59.004990    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:59.023550    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:59.023640    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:59.042917    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:59.042997    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:59.054594    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:59.054686    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:59.065093    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:59.065159    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:59.075620    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:59.075696    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:59.092268    9805 logs.go:276] 0 containers: []
	W0408 04:44:59.092279    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:59.092341    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:59.103016    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:59.103034    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:59.103040    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:59.114640    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:59.114664    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:59.151431    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:59.151443    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:59.166993    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:59.167007    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:59.179288    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:59.179302    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:59.190640    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:59.190652    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:59.213947    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:59.213959    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:59.251729    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:59.251743    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:59.267958    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:59.267970    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:59.282421    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:59.282435    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:59.296486    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:59.296501    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:59.309256    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:59.309267    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:59.313846    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:59.313853    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:59.340367    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:59.340382    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:59.357995    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:59.358010    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:59.370048    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:59.370058    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:59.384425    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:59.384436    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:59.551636    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:59.551660    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:01.901566    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:04.553529    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:04.553555    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:06.904099    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:06.904331    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:06.923957    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:06.924057    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:06.937628    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:06.937708    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:06.950087    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:06.950160    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:06.961199    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:06.961283    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:06.971864    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:06.971930    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:06.982166    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:06.982233    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:06.992348    9805 logs.go:276] 0 containers: []
	W0408 04:45:06.992358    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:06.992414    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:07.002664    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:07.002681    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:07.002688    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:07.040078    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:07.040091    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:07.054710    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:07.054723    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:07.068151    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:07.068164    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:07.079813    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:07.079825    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:07.084212    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:07.084220    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:07.109439    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:07.109450    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:07.132011    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:07.132021    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:07.148595    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:07.148605    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:07.160929    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:07.160940    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:07.174446    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:07.174457    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:07.186122    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:07.186136    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:07.220971    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:07.220982    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:07.232776    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:07.232788    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:07.245497    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:07.245508    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:07.261089    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:07.261098    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:07.272631    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:07.272642    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:09.798852    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:09.555024    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:09.555211    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:09.604791    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:45:09.604867    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:09.617091    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:45:09.617160    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:09.629002    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:45:09.629079    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:09.646638    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:45:09.646713    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:09.658776    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:45:09.658850    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:09.669861    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:45:09.669933    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:09.679943    9654 logs.go:276] 0 containers: []
	W0408 04:45:09.679954    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:09.680009    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:09.693776    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:45:09.693793    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:45:09.693799    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:45:09.705171    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:45:09.705184    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:45:09.717479    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:45:09.717492    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:45:09.734638    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:45:09.734652    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:45:09.748947    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:09.748959    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:09.753853    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:09.753861    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:09.814250    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:45:09.814264    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:45:09.829366    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:45:09.829380    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:45:09.845394    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:45:09.845404    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:45:09.857479    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:09.857490    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:09.881482    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:45:09.881490    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:09.893744    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:09.893755    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:45:09.911390    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:09.911483    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:09.927384    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:45:09.927390    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:45:09.942425    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:09.942434    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:45:09.942462    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:45:09.942466    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:09.942470    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:09.942474    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:09.942477    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:45:14.801006    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:14.801216    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:14.818900    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:14.819008    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:14.832862    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:14.832936    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:14.844753    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:14.844818    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:14.855412    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:14.855482    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:14.866255    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:14.866327    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:14.877305    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:14.877374    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:14.888066    9805 logs.go:276] 0 containers: []
	W0408 04:45:14.888077    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:14.888137    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:14.899757    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:14.899786    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:14.899792    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:14.910875    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:14.910888    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:14.925443    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:14.925457    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:14.936880    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:14.936894    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:14.948838    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:14.948848    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:14.953570    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:14.953579    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:14.970606    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:14.970617    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:14.981621    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:14.981633    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:15.005456    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:15.005468    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:15.041976    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:15.041989    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:15.056010    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:15.056019    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:15.083396    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:15.083407    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:15.107352    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:15.107364    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:15.123529    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:15.123540    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:15.136060    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:15.136071    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:15.173237    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:15.173247    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:15.186781    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:15.186792    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:17.702722    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:19.946565    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:22.704954    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:22.705114    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:22.720031    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:22.720109    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:22.731175    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:22.731249    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:22.743785    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:22.743852    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:22.757998    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:22.758070    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:22.768307    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:22.768385    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:22.778660    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:22.778726    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:22.789152    9805 logs.go:276] 0 containers: []
	W0408 04:45:22.789163    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:22.789217    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:22.802903    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:22.802919    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:22.802928    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:22.814552    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:22.814564    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:22.827018    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:22.827030    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:22.841210    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:22.841221    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:22.852536    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:22.852548    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:22.867647    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:22.867660    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:22.886038    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:22.886048    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:22.897902    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:22.897912    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:22.914863    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:22.914877    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:22.939175    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:22.939183    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:22.978220    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:22.978228    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:23.015393    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:23.015405    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:23.028336    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:23.028346    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:23.040045    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:23.040056    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:23.044731    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:23.044739    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:23.059183    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:23.059193    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:23.084907    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:23.084918    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:25.604705    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:24.948990    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:24.949243    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:24.974593    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:45:24.974720    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:24.991922    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:45:24.992027    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:25.005348    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:45:25.005425    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:25.021559    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:45:25.021636    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:25.032555    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:45:25.032630    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:25.044825    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:45:25.044897    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:25.055249    9654 logs.go:276] 0 containers: []
	W0408 04:45:25.055264    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:25.055324    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:25.065860    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:45:25.065877    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:45:25.065882    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:45:25.078299    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:45:25.078310    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:25.090481    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:25.090491    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:45:25.108296    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:25.108398    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:25.125325    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:25.125344    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:25.164102    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:45:25.164113    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:45:25.182593    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:45:25.182604    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:45:25.197860    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:45:25.197871    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:45:25.210431    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:45:25.210442    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:45:25.232308    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:45:25.232323    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:45:25.244054    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:25.244064    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:25.267212    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:25.267221    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:25.271546    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:45:25.271553    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:45:25.285612    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:45:25.285625    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:45:25.297302    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:25.297313    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:45:25.297340    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:45:25.297345    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:25.297348    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:25.297354    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:25.297356    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:45:30.606823    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:30.606991    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:30.630570    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:30.630648    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:30.649892    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:30.649968    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:30.660821    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:30.660893    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:30.671839    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:30.671916    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:30.682466    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:30.682541    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:30.693142    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:30.693218    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:30.703488    9805 logs.go:276] 0 containers: []
	W0408 04:45:30.703499    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:30.703560    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:30.714208    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:30.714226    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:30.714232    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:30.731636    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:30.731648    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:30.745013    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:30.745025    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:30.758769    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:30.758779    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:30.772234    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:30.772245    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:30.784136    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:30.784146    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:30.795004    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:30.795015    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:30.807290    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:30.807302    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:30.830702    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:30.830720    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:30.855215    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:30.855230    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:30.868217    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:30.868229    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:30.882842    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:30.882857    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:30.898258    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:30.898268    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:30.910165    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:30.910175    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:30.922060    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:30.922072    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:30.960719    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:30.960729    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
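Kernel messages are filtered to warning severity and above before the 400-line cap. Glossing the flags from util-linux dmesg (the log does not explain them): -H for human-readable output, -P to skip the pager, -L=never to drop color codes, and --level to keep only warn through emerg:

    # warnings and worse only, capped at the last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400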
	I0408 04:45:30.965221    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:30.965227    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
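The "describe nodes" step deliberately uses the kubectl binary minikube ships inside the guest, versioned to match the deployed cluster, together with the in-VM kubeconfig; this keeps the dump working even when the host's kubectl or kubeconfig is out of step with the cluster:

    # guest-local kubectl pinned to the cluster version
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig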
	I0408 04:45:33.501206    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:35.301344    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:38.503399    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
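Between sweeps the driver polls the apiserver's /healthz endpoint at https://10.0.2.15:8443; "context deadline exceeded ... while awaiting headers" means no HTTP response arrived within the roughly five-second client window (compare the timestamps on the check at 04:45:33 and the failure at 04:45:38), so another diagnostic sweep begins. A hand probe from inside the guest could look like the following, where curl stands in for minikube's Go HTTP client and -k is needed because the apiserver certificate is self-signed:

    # mirror the ~5s client timeout; prints ok only if /healthz answers
    curl -k -m 5 https://10.0.2.15:8443/healthz && echo ok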
	I0408 04:45:38.503618    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:38.530452    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:38.530581    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:38.549166    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:38.549248    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:38.562624    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:38.562707    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:38.574920    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:38.574992    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:38.585719    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:38.585790    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:38.596869    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:38.596937    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:38.610629    9805 logs.go:276] 0 containers: []
	W0408 04:45:38.610639    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:38.610691    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:38.624382    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:38.624397    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:38.624402    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:38.663699    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:38.663708    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:38.698522    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:38.698537    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:38.712784    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:38.712796    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:38.724319    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:38.724330    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:38.749025    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:38.749036    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:38.760882    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:38.760896    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:38.774454    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:38.774466    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:38.794565    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:38.794578    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:38.798736    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:38.798744    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:38.812771    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:38.812781    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:38.837658    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:38.837668    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:38.855567    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:38.855580    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:38.875938    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:38.875950    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:38.888369    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:38.888380    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:38.903390    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:38.903399    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:38.926775    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:38.926786    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:40.303583    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:40.303759    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:40.316432    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:45:40.316520    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:40.327807    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:45:40.327884    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:40.338653    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:45:40.338727    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:40.349534    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:45:40.349607    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:40.360101    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:45:40.360171    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:40.371009    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:45:40.371080    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:40.381769    9654 logs.go:276] 0 containers: []
	W0408 04:45:40.381780    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:40.381838    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:40.392271    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:45:40.392299    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:40.392305    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:40.396920    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:45:40.396929    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:45:40.410975    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:45:40.410985    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:45:40.431138    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:45:40.431149    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:45:40.444710    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:45:40.444723    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:45:40.456450    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:45:40.456463    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:45:40.478786    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:40.478797    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:40.501994    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:40.502002    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:45:40.519535    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:40.519626    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
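The kubelet problem flagged here is a Node-authorizer denial, not a crash: under Node authorization a kubelet may read a ConfigMap only once a pod that mounts it is bound to that node, and at this point in the upgrade no such binding exists for "coredns" on running-upgrade-835000, hence "no relationship found between node ... and this object". Were the apiserver reachable, the binding side could be confirmed with something like the following (assuming the coredns pods carry the default k8s-app=kube-dns label):

    # where are the coredns pods scheduled, and does the configmap exist?
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl -n kube-system get configmap coredns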
	I0408 04:45:40.535822    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:45:40.535828    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:40.547071    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:45:40.547083    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:45:40.558941    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:45:40.558952    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:45:40.574110    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:45:40.574123    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:45:40.585895    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:40.585905    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:40.623013    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:40.623024    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:45:40.623051    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:45:40.623056    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:40.623059    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:40.623063    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:40.623066    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
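Note that two minikube processes are interleaved throughout this excerpt: PID 9805 and PID 9654 belong to different test profiles, each in its own QEMU guest (the address repeats because every user-mode-networked guest gets the same default IP, 10.0.2.15). Both run the same outer loop: probe /healthz until the client deadline, dump the full diagnostic bundle, and probe again. Schematically, as a deliberately reduced sketch in which gather_logs is a hypothetical stand-in for the docker/journalctl/kubectl sweep shown above, not a real command:

    # shape of the loop each process is executing
    while ! curl -ksf -m 5 https://10.0.2.15:8443/healthz >/dev/null; do
      gather_logs   # hypothetical: the full diagnostic sweep above
      sleep 2
    done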
	I0408 04:45:41.440198    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:46.440718    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:46.440872    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:46.452605    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:46.452686    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:46.466175    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:46.466251    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:46.478186    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:46.478257    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:46.493490    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:46.493564    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:46.503851    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:46.503926    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:46.514453    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:46.514526    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:46.524995    9805 logs.go:276] 0 containers: []
	W0408 04:45:46.525005    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:46.525061    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:46.536459    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:46.536508    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:46.536515    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:46.550258    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:46.550271    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:46.561184    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:46.561194    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:46.572883    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:46.572893    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:46.596204    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:46.596217    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:46.609690    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:46.609699    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:46.644266    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:46.644278    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:46.656151    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:46.656161    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:46.667957    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:46.667967    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:46.692299    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:46.692307    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:46.696356    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:46.696365    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:46.710426    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:46.710435    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:46.734665    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:46.734678    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:46.748907    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:46.748919    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:46.766044    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:46.766056    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:46.778106    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:46.778118    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:46.814927    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:46.814938    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:49.328278    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:50.627089    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:54.330390    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:54.330514    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:54.345626    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:54.345718    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:54.357292    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:54.357365    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:54.369043    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:54.369115    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:54.379249    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:54.379315    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:54.389498    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:54.389559    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:54.399868    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:54.399935    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:54.412582    9805 logs.go:276] 0 containers: []
	W0408 04:45:54.412596    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:54.412650    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:54.422798    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:54.422816    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:54.422821    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:54.436684    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:54.436695    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:54.461679    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:54.461688    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:54.473424    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:54.473437    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:54.487106    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:54.487116    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:54.498866    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:54.498877    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:54.513131    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:54.513142    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:54.525331    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:54.525343    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:54.542546    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:54.542557    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:54.554234    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:54.554245    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:54.593219    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:54.593231    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:54.597678    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:54.597686    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:54.611089    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:54.611099    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:54.625411    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:54.625425    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:54.648003    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:54.648011    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:54.660197    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:54.660211    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:54.696468    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:54.696482    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:55.629364    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:55.629561    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:55.644348    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:45:55.644428    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:55.656540    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:45:55.656615    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:55.667968    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:45:55.668041    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:55.678238    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:45:55.678299    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:55.688769    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:45:55.688845    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:55.699441    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:45:55.699510    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:55.712867    9654 logs.go:276] 0 containers: []
	W0408 04:45:55.712877    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:55.712936    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:55.723038    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:45:55.723055    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:45:55.723060    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:45:55.737733    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:45:55.737745    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:45:55.749449    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:45:55.749462    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:45:55.764823    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:45:55.764838    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:45:55.780812    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:45:55.780825    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:45:55.798331    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:45:55.798341    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:45:55.811126    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:55.811138    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:55.815709    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:55.815718    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:55.857740    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:45:55.857753    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:45:55.868915    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:55.868926    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:55.894124    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:45:55.894136    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:55.905496    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:55.905506    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:45:55.924653    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:55.924743    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:55.940735    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:45:55.940740    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:45:55.961617    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:55.961627    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:45:55.961655    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:45:55.961659    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:45:55.961662    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:45:55.961666    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:45:55.961669    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:45:57.210678    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:02.210891    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:02.211075    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:02.225917    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:02.226006    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:02.238187    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:02.238266    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:02.248847    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:02.248936    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:02.259637    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:02.259717    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:02.270048    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:02.270123    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:02.280607    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:02.280675    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:02.290704    9805 logs.go:276] 0 containers: []
	W0408 04:46:02.290714    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:02.290771    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:02.301486    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:02.301504    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:02.301511    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:02.343963    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:02.343974    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:02.365920    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:02.365931    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:02.379896    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:02.379909    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:02.391650    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:02.391667    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:02.405272    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:02.405285    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:02.416705    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:02.416716    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:02.428574    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:02.428589    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:02.465511    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:02.465519    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:02.477171    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:02.477183    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:02.500607    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:02.500615    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:02.525042    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:02.525053    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:02.536703    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:02.536715    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:02.551462    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:02.551472    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:02.568592    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:02.568603    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:02.572795    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:02.572803    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:02.584053    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:02.584064    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:05.106088    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:05.965712    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:10.108396    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:10.108517    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:10.121142    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:10.121211    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:10.131458    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:10.131516    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:10.142291    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:10.142364    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:10.152749    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:10.152821    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:10.162854    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:10.162911    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:10.173208    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:10.173276    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:10.187271    9805 logs.go:276] 0 containers: []
	W0408 04:46:10.187283    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:10.187340    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:10.198998    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:10.199016    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:10.199022    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:10.213307    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:10.213318    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:10.224385    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:10.224396    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:10.236508    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:10.236519    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:10.250919    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:10.250934    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:10.261875    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:10.261887    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:10.266202    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:10.266208    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:10.301635    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:10.301650    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:10.330361    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:10.330372    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:10.349039    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:10.349049    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:10.362301    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:10.362312    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:10.384744    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:10.384755    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:10.408391    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:10.408400    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:10.420449    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:10.420460    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:10.460519    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:10.460527    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:10.476838    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:10.476848    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:10.489137    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:10.489148    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:10.967269    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:10.967405    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:10.983927    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:46:10.984006    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:10.993961    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:46:10.994036    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:11.004615    9654 logs.go:276] 2 containers: [89d305451507 238b4c800085]
	I0408 04:46:11.004690    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:11.015299    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:46:11.015372    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:11.026158    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:46:11.026234    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:11.036759    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:46:11.036830    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:11.047113    9654 logs.go:276] 0 containers: []
	W0408 04:46:11.047125    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:11.047185    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:11.061112    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:46:11.061128    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:46:11.061134    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:46:11.083668    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:46:11.083679    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:46:11.095558    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:46:11.095568    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:46:11.110427    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:46:11.110436    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:46:11.124548    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:11.124558    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:11.149058    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:46:11.149065    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:11.161528    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:46:11.161538    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:46:11.177040    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:11.177056    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:46:11.196239    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:11.196331    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:11.212065    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:11.212071    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:11.217670    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:11.217681    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:11.254237    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:46:11.254251    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:46:11.268524    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:46:11.268534    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:46:11.280995    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:46:11.281006    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:46:11.298678    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:11.298692    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:46:11.298722    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:46:11.298727    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:11.298731    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:11.298736    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:11.298739    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:46:13.003050    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:18.005426    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:18.005863    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:18.047629    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:18.047755    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:18.069090    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:18.069191    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:18.084119    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:18.084194    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:18.103660    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:18.103730    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:18.114294    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:18.114359    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:18.124861    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:18.124928    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:18.135448    9805 logs.go:276] 0 containers: []
	W0408 04:46:18.135462    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:18.135518    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:18.145860    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:18.145881    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:18.145887    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:18.159600    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:18.159612    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:18.182435    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:18.182443    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:18.197471    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:18.197486    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:18.211741    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:18.211753    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:18.229365    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:18.229375    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:18.254291    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:18.254301    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:18.269212    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:18.269227    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:18.281189    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:18.281204    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:18.317857    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:18.317867    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:18.322385    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:18.322391    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:18.336347    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:18.336358    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:18.347787    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:18.347796    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:18.385165    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:18.385174    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:18.396957    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:18.396968    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:18.410473    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:18.410482    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:18.421439    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:18.421451    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:20.938193    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:21.302719    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:25.940405    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:25.940603    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:25.958529    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:25.958617    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:25.971692    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:25.971773    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:25.983148    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:25.983221    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:25.993754    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:25.993833    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:26.003926    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:26.003997    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:26.017204    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:26.017277    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:26.027289    9805 logs.go:276] 0 containers: []
	W0408 04:46:26.027300    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:26.027357    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:26.038134    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:26.038151    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:26.038156    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:26.049845    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:26.049858    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:26.062022    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:26.062034    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:26.076944    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:26.076957    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:26.089163    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:26.089173    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:26.100594    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:26.100605    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:26.139192    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:26.139202    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:26.174146    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:26.174160    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:26.196355    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:26.196366    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
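	(Worth noting in the command above: journalctl accepts -u more than once, so a single invocation interleaves the docker and cri-docker units in timestamp order, and -n 400 caps the output to match the --tail 400 used for container logs:

	    # One query over both runtime units, newest 400 lines, merged chronologically.
	    sudo journalctl -u docker -u cri-docker -n 400
	)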
	I0408 04:46:26.219913    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:26.219925    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:26.233639    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:26.233649    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:26.253376    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:26.253388    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:26.270485    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:26.270497    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:26.284488    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:26.284499    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
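	(The dmesg flags used above, per util-linux: -P disables the pager, -H prints human-readable timestamps, -L=never disables color, and --level restricts output to warning-and-worse records; the trailing tail keeps the invocation bounded like the other collectors:

	    # Kernel ring buffer, warnings and worse only, no pager/color, last 400 lines.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	)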
	I0408 04:46:26.289042    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:26.289051    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:26.317055    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:26.317072    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:26.303210    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:26.303303    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:26.314875    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:46:26.314952    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:26.326915    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:46:26.326995    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:26.338615    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:46:26.338735    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:26.350168    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:46:26.350245    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:26.361320    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:46:26.361389    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:26.371937    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:46:26.372006    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:26.382614    9654 logs.go:276] 0 containers: []
	W0408 04:46:26.382623    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:26.382678    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:26.393520    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:46:26.393532    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:46:26.393536    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:46:26.404645    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:46:26.404655    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:46:26.422204    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:46:26.422214    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
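	(The "container status" collector above is deliberately runtime-agnostic: `which crictl || echo crictl` substitutes the crictl path when it is installed — and a bare crictl that will fail otherwise — while the trailing `|| sudo docker ps -a` falls back to Docker whenever crictl is absent or errors out:

	    # Prefer crictl (CRI-aware) when present; fall back to docker ps -a.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	)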
	I0408 04:46:26.434124    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:26.434138    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:46:26.452863    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:26.452956    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:26.469497    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:46:26.469505    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:46:26.481298    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:46:26.481308    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:46:26.495273    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:46:26.495287    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:46:26.507478    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:46:26.507489    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:46:26.519702    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:46:26.519717    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:46:26.534430    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:46:26.534444    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:46:26.547530    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:46:26.547541    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:46:26.559153    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:46:26.559165    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:46:26.574443    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:26.574454    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:26.600616    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:26.600626    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:26.605494    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:26.605501    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:26.646341    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:26.646351    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:46:26.646387    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:46:26.646392    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:26.646396    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:26.646401    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:26.646404    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
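	(The two kubelet problems flagged above are Node-authorizer denials: the node identity system:node:running-upgrade-835000 may only read ConfigMaps referenced by pods bound to that node, and at that moment no such relationship existed for the coredns ConfigMap. A hedged diagnostic along these lines — the command shape is standard kubectl, but this exact check is illustrative and not something the test harness runs:

	    # Ask the apiserver whether the kubelet's identity may list the configmaps,
	    # via impersonation; expect "no" while the denial persists.
	    kubectl auth can-i list configmaps -n kube-system \
	        --as=system:node:running-upgrade-835000
	)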
	I0408 04:46:26.330880    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:26.330893    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:28.845436    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:33.846445    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:33.846918    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:33.885975    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:33.886116    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:33.908081    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:33.908188    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:33.923572    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:33.923651    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:33.936157    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:33.936226    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:33.948599    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:33.948671    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:33.959162    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:33.959223    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:33.969636    9805 logs.go:276] 0 containers: []
	W0408 04:46:33.969650    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:33.969707    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:33.987583    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:33.987603    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:33.987609    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:34.011429    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:34.011438    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:34.023712    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:34.023724    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:34.038428    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:34.038442    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:34.050639    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:34.050652    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:34.061603    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:34.061614    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:34.065522    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:34.065528    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:34.080139    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:34.080153    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:34.098095    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:34.098115    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:34.110987    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:34.111000    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:34.123980    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:34.123996    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:34.163929    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:34.163946    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:34.189545    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:34.189556    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:34.206714    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:34.206725    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:34.220014    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:34.220024    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:34.256824    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:34.256835    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:34.270465    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:34.270475    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:36.650403    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:36.784485    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:41.652678    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:41.652882    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:41.669336    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:46:41.669416    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:41.681635    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:46:41.681716    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:41.693531    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:46:41.693606    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:41.704058    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:46:41.704123    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:41.714262    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:46:41.714323    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:41.724825    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:46:41.724899    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:41.736455    9654 logs.go:276] 0 containers: []
	W0408 04:46:41.736465    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:41.736521    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:41.746984    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:46:41.747002    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:46:41.747008    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:46:41.763940    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:41.763951    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:41.787676    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:41.787685    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:46:41.806723    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:41.806819    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:41.823635    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:46:41.823651    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:46:41.840593    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:46:41.840603    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:46:41.853073    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:46:41.853083    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:46:41.865823    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:46:41.865834    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:46:41.878509    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:46:41.878520    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:46:41.894875    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:41.894888    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:41.933280    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:46:41.933291    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:46:41.951933    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:41.951942    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:41.956994    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:46:41.957006    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:46:41.969216    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:46:41.969227    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:46:41.982814    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:46:41.982822    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:46:41.995331    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:46:41.995346    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:42.007782    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:42.007792    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:46:42.007817    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:46:42.007822    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:42.007826    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:42.007830    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:42.007833    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:46:41.786593    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:41.786666    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:41.798300    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:41.798376    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:41.810237    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:41.810309    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:41.820877    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:41.820954    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:41.832304    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:41.832379    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:41.851404    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:41.851481    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:41.862975    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:41.863048    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:41.874162    9805 logs.go:276] 0 containers: []
	W0408 04:46:41.874173    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:41.874233    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:41.885604    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:41.885623    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:41.885629    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:41.898180    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:41.898194    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:41.912332    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:41.912347    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:41.938468    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:41.938481    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:41.950945    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:41.950957    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:41.967601    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:41.967615    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:41.981378    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:41.981393    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:41.996064    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:41.996075    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:42.034635    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:42.034645    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:42.072603    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:42.072615    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:42.086849    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:42.086860    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:42.105405    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:42.105419    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:42.118080    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:42.118091    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:42.122434    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:42.122441    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:42.134534    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:42.134547    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:42.159023    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:42.159036    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:42.173357    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:42.173372    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:44.687043    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:49.689179    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:49.689361    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:49.700136    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:49.700216    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:49.714499    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:49.714574    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:49.724936    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:49.725006    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:49.736219    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:49.736292    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:49.747141    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:49.747215    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:49.757308    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:49.757374    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:49.767544    9805 logs.go:276] 0 containers: []
	W0408 04:46:49.767554    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:49.767612    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:49.778606    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:49.778624    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:49.778630    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:49.817371    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:49.817384    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:49.821573    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:49.821581    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:49.835423    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:49.835436    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:49.860144    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:49.860157    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:49.871806    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:49.871817    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:49.884708    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:49.884720    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:49.921183    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:49.921196    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:49.935665    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:49.935676    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:49.948213    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:49.948227    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:49.960642    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:49.960651    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:49.972147    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:49.972158    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:49.984361    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:49.984374    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:49.999509    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:49.999519    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:50.016142    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:50.016153    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:50.043496    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:50.043509    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:50.060835    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:50.060846    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:52.010644    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:52.584741    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:57.013116    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:57.013388    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:57.033490    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:46:57.033579    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:57.048248    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:46:57.048326    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:57.060272    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:46:57.060341    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:57.070739    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:46:57.070812    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:57.081235    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:46:57.081300    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:57.094258    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:46:57.094329    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:57.104904    9654 logs.go:276] 0 containers: []
	W0408 04:46:57.104914    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:57.104978    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:57.115527    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:46:57.115542    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:46:57.115548    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:46:57.130011    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:46:57.130022    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:46:57.142330    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:46:57.142339    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:46:57.157002    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:46:57.157011    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:46:57.181695    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:46:57.181704    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:46:57.193220    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:46:57.193229    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:57.209590    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:57.209599    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:46:57.227729    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:57.227832    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:57.244283    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:57.244290    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:57.279625    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:46:57.279634    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:46:57.315855    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:57.315865    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:57.320455    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:46:57.320462    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:46:57.332138    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:46:57.332148    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:46:57.345639    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:57.345650    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:57.369151    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:46:57.369159    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:46:57.380932    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:46:57.380942    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:46:57.392427    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:57.392438    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:46:57.392465    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:46:57.392471    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:46:57.392475    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:46:57.392480    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:46:57.392482    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:46:57.586859    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:57.586998    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:57.598405    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:57.598492    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:57.609271    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:57.609351    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:57.619653    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:57.619726    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:57.633643    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:57.633712    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:57.644397    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:57.644470    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:57.654769    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:57.654836    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:57.665667    9805 logs.go:276] 0 containers: []
	W0408 04:46:57.665678    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:57.665744    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:57.676287    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:57.676308    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:57.676316    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:57.688502    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:57.688513    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:57.700300    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:57.700314    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:57.739533    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:57.739545    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:57.744415    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:57.744421    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:57.756093    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:57.756103    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:57.771037    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:57.771052    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:57.788479    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:57.788489    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:57.802173    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:57.802187    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:57.815044    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:57.815054    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:57.829399    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:57.829411    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:57.840467    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:57.840478    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:57.865662    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:57.865675    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:57.879509    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:57.879520    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:57.901695    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:57.901702    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:57.937679    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:57.937692    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:57.951707    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:57.951717    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:47:00.468839    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:05.471128    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:05.471475    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:05.501655    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:47:05.501788    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:05.519082    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:47:05.519169    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:05.532815    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:47:05.532884    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:05.544710    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:47:05.544789    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:05.555754    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:47:05.555824    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:05.566676    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:47:05.566749    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:05.577560    9805 logs.go:276] 0 containers: []
	W0408 04:47:05.577571    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:05.577633    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:05.588515    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:47:05.588532    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:47:05.588537    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:47:05.600519    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:47:05.600532    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:47:05.611867    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:05.611879    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:05.615976    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:47:05.615986    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:47:05.632283    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:47:05.632297    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:47:05.649693    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:47:05.649704    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:47:05.667377    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:05.667387    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:05.690862    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:47:05.690874    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:05.703462    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:05.703476    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:05.738980    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:47:05.738996    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:47:05.750765    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:05.750777    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:47:05.790351    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:47:05.790359    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:47:05.804506    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:47:05.804519    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:47:05.817972    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:47:05.817982    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:47:05.829233    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:47:05.829243    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:47:05.844619    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:47:05.844630    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:47:05.865494    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:47:05.865504    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:47:07.396516    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:08.392520    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:12.398726    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:12.398832    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:12.409612    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:47:12.409690    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:12.420897    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:47:12.420976    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:12.431924    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:47:12.432000    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:12.443019    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:47:12.443084    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:12.453198    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:47:12.453263    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:12.463531    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:47:12.463593    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:12.473895    9654 logs.go:276] 0 containers: []
	W0408 04:47:12.473910    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:12.473978    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:12.484228    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:47:12.484244    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:12.484250    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:12.524134    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:47:12.524145    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:47:12.538493    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:47:12.538503    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:47:12.550854    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:47:12.550864    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:47:12.565946    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:47:12.565956    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:47:12.578162    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:12.578172    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:47:12.596616    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:12.596708    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:12.612914    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:47:12.612921    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:47:12.625897    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:47:12.625907    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:47:12.643220    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:47:12.643232    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:47:12.655145    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:12.655159    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:12.659390    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:47:12.659396    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:12.672834    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:47:12.672845    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:47:12.684959    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:47:12.684970    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:47:12.704419    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:12.704430    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:12.728057    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:47:12.728066    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:47:12.748289    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:12.748299    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:47:12.748326    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:47:12.748331    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:12.748334    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:12.748338    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:12.748341    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:47:13.394966    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:13.395169    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:13.411422    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:47:13.411510    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:13.423752    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:47:13.423841    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:13.437688    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:47:13.437760    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:13.448154    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:47:13.448222    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:13.458518    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:47:13.458594    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:13.470293    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:47:13.470366    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:13.480361    9805 logs.go:276] 0 containers: []
	W0408 04:47:13.480374    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:13.480436    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:13.491156    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:47:13.491174    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:47:13.491179    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:47:13.504827    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:47:13.504839    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:47:13.518944    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:13.518957    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:13.523730    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:47:13.523739    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:47:13.538348    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:47:13.538358    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:47:13.549542    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:47:13.549552    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:47:13.566205    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:47:13.566215    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:47:13.579764    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:47:13.579775    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:47:13.594383    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:47:13.594395    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:13.608062    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:13.608076    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:13.643592    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:47:13.643604    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:47:13.668423    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:47:13.668432    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:47:13.682774    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:47:13.682788    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:47:13.697106    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:13.697120    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:13.720005    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:13.720014    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:47:13.758404    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:47:13.758415    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:47:13.772147    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:47:13.772158    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:47:16.306386    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:21.308643    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:21.308836    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:22.752353    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:21.340318    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:47:21.340383    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:21.352002    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:47:21.352077    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:21.362894    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:47:21.362967    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:21.373048    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:47:21.373121    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:21.383943    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:47:21.384012    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:21.398424    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:47:21.398492    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:21.408565    9805 logs.go:276] 0 containers: []
	W0408 04:47:21.408577    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:21.408634    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:21.421536    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:47:21.421555    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:47:21.421561    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:47:21.435098    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:47:21.435111    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:47:21.447165    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:47:21.447178    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:47:21.458224    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:47:21.458235    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:47:21.472494    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:47:21.472503    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:47:21.497554    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:47:21.497565    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:47:21.512105    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:47:21.512116    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:47:21.529715    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:21.529725    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:21.565802    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:47:21.565814    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:47:21.585881    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:47:21.585894    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:21.598101    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:21.598116    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:21.602343    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:47:21.602352    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:47:21.613899    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:21.613910    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:21.635398    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:21.635409    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:47:21.671859    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:47:21.671867    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:47:21.683044    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:47:21.683056    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:47:21.698022    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:47:21.698034    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:47:24.211437    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:27.754528    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:27.754737    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:27.769644    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:47:27.769730    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:27.781775    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:47:27.781856    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:27.800600    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:47:27.800676    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:27.811314    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:47:27.811387    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:27.821787    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:47:27.821855    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:27.831880    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:47:27.831955    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:27.842300    9654 logs.go:276] 0 containers: []
	W0408 04:47:27.842312    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:27.842374    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:27.860100    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:47:27.860117    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:47:27.860122    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:47:27.872815    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:47:27.872826    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:27.884904    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:27.884914    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:47:27.904171    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:27.904264    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:27.921048    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:27.921063    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:27.926264    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:47:27.926272    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:47:27.938613    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:47:27.938624    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:47:27.953885    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:47:27.953896    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:47:27.968300    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:47:27.968315    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:47:27.980079    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:47:27.980089    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:47:27.991866    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:47:27.991877    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:47:28.010189    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:28.010201    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:28.049436    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:47:28.049452    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:47:28.064205    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:47:28.064218    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:47:28.076146    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:28.076160    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:28.101501    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:47:28.101511    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:47:28.113541    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:28.113554    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:47:28.113579    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:47:28.113584    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:28.113597    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:28.113694    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:28.113720    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:47:29.213695    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:29.213749    9805 kubeadm.go:591] duration metric: took 4m3.881072542s to restartPrimaryControlPlane
	W0408 04:47:29.213790    9805 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 04:47:29.213812    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0408 04:47:30.244487    9805 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.030677125s)
	I0408 04:47:30.244552    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 04:47:30.249630    9805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 04:47:30.252524    9805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 04:47:30.255930    9805 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 04:47:30.255938    9805 kubeadm.go:156] found existing configuration files:
	
	I0408 04:47:30.255983    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/admin.conf
	I0408 04:47:30.258963    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 04:47:30.259002    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 04:47:30.262973    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/kubelet.conf
	I0408 04:47:30.266379    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 04:47:30.266407    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 04:47:30.269060    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/controller-manager.conf
	I0408 04:47:30.271582    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 04:47:30.271602    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 04:47:30.274830    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/scheduler.conf
	I0408 04:47:30.279109    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 04:47:30.279149    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 04:47:30.282309    9805 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 04:47:30.299871    9805 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0408 04:47:30.299962    9805 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 04:47:30.351371    9805 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 04:47:30.351425    9805 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 04:47:30.351480    9805 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 04:47:30.402035    9805 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 04:47:30.410246    9805 out.go:204]   - Generating certificates and keys ...
	I0408 04:47:30.410280    9805 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 04:47:30.410311    9805 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 04:47:30.410347    9805 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 04:47:30.410378    9805 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 04:47:30.410410    9805 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 04:47:30.410435    9805 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 04:47:30.410469    9805 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 04:47:30.410514    9805 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 04:47:30.410560    9805 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 04:47:30.410598    9805 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 04:47:30.410617    9805 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 04:47:30.410649    9805 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 04:47:30.472771    9805 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 04:47:30.568193    9805 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 04:47:30.609220    9805 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 04:47:30.675304    9805 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 04:47:30.704579    9805 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 04:47:30.704990    9805 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 04:47:30.705019    9805 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 04:47:30.772566    9805 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 04:47:30.776774    9805 out.go:204]   - Booting up control plane ...
	I0408 04:47:30.776829    9805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 04:47:30.776916    9805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 04:47:30.776964    9805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 04:47:30.777013    9805 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 04:47:30.777101    9805 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 04:47:34.777108    9805 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.002789 seconds
	I0408 04:47:34.777188    9805 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 04:47:34.781652    9805 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 04:47:35.294223    9805 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 04:47:35.294447    9805 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-462000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 04:47:35.798722    9805 kubeadm.go:309] [bootstrap-token] Using token: yxvc6h.6bzi3s39gqqnpulm
	I0408 04:47:35.802619    9805 out.go:204]   - Configuring RBAC rules ...
	I0408 04:47:35.802694    9805 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 04:47:35.806390    9805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 04:47:35.812010    9805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 04:47:35.813007    9805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 04:47:35.814090    9805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 04:47:35.814997    9805 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 04:47:35.819653    9805 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 04:47:35.998602    9805 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 04:47:36.208521    9805 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 04:47:36.209183    9805 kubeadm.go:309] 
	I0408 04:47:36.209212    9805 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 04:47:36.209216    9805 kubeadm.go:309] 
	I0408 04:47:36.209257    9805 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 04:47:36.209263    9805 kubeadm.go:309] 
	I0408 04:47:36.209280    9805 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 04:47:36.209337    9805 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 04:47:36.209365    9805 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 04:47:36.209367    9805 kubeadm.go:309] 
	I0408 04:47:36.209410    9805 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 04:47:36.209417    9805 kubeadm.go:309] 
	I0408 04:47:36.209454    9805 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 04:47:36.209459    9805 kubeadm.go:309] 
	I0408 04:47:36.209498    9805 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 04:47:36.209550    9805 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 04:47:36.209588    9805 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 04:47:36.209596    9805 kubeadm.go:309] 
	I0408 04:47:36.209649    9805 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 04:47:36.209699    9805 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 04:47:36.209701    9805 kubeadm.go:309] 
	I0408 04:47:36.209766    9805 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yxvc6h.6bzi3s39gqqnpulm \
	I0408 04:47:36.209830    9805 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63c1082056c9546e83bc7e238ddca3361d3bc0d4a9173109edd9ba5d9e410231 \
	I0408 04:47:36.209843    9805 kubeadm.go:309] 	--control-plane 
	I0408 04:47:36.209847    9805 kubeadm.go:309] 
	I0408 04:47:36.209890    9805 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 04:47:36.209894    9805 kubeadm.go:309] 
	I0408 04:47:36.209941    9805 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yxvc6h.6bzi3s39gqqnpulm \
	I0408 04:47:36.210000    9805 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63c1082056c9546e83bc7e238ddca3361d3bc0d4a9173109edd9ba5d9e410231 
	I0408 04:47:36.210108    9805 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 04:47:36.210197    9805 cni.go:84] Creating CNI manager for ""
	I0408 04:47:36.210206    9805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:47:36.214059    9805 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 04:47:36.221024    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 04:47:36.223995    9805 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 04:47:36.230716    9805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 04:47:36.230801    9805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-462000 minikube.k8s.io/updated_at=2024_04_08T04_47_36_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=stopped-upgrade-462000 minikube.k8s.io/primary=true
	I0408 04:47:36.230810    9805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 04:47:36.235793    9805 ops.go:34] apiserver oom_adj: -16
	I0408 04:47:36.262877    9805 kubeadm.go:1107] duration metric: took 32.08925ms to wait for elevateKubeSystemPrivileges
	W0408 04:47:36.268441    9805 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 04:47:36.268452    9805 kubeadm.go:393] duration metric: took 4m10.949799792s to StartCluster
	I0408 04:47:36.268463    9805 settings.go:142] acquiring lock: {Name:mkd5c8378547f472aec7259eff81e77b1454222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:47:36.268545    9805 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:47:36.268962    9805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/kubeconfig: {Name:mk04d6060f19666b377da34a3aa7f8b9bcbb5054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:47:36.269183    9805 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:47:36.272852    9805 out.go:177] * Verifying Kubernetes components...
	I0408 04:47:36.269209    9805 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 04:47:36.269261    9805 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:47:36.280092    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:47:36.280093    9805 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-462000"
	I0408 04:47:36.280121    9805 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-462000"
	W0408 04:47:36.280125    9805 addons.go:243] addon storage-provisioner should already be in state true
	I0408 04:47:36.280096    9805 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-462000"
	I0408 04:47:36.280139    9805 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-462000"
	I0408 04:47:36.280144    9805 host.go:66] Checking if "stopped-upgrade-462000" exists ...
	I0408 04:47:36.280585    9805 retry.go:31] will retry after 1.205678631s: connect: dial unix /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/monitor: connect: connection refused
	I0408 04:47:36.285027    9805 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:47:36.289092    9805 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 04:47:36.289100    9805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 04:47:36.289110    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:47:38.116268    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:36.379292    9805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 04:47:36.384771    9805 api_server.go:52] waiting for apiserver process to appear ...
	I0408 04:47:36.384810    9805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:47:36.388602    9805 api_server.go:72] duration metric: took 119.409208ms to wait for apiserver process to appear ...
	I0408 04:47:36.388611    9805 api_server.go:88] waiting for apiserver healthz status ...
	I0408 04:47:36.388618    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:36.463765    9805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 04:47:37.489307    9805 kapi.go:59] client config for stopped-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1039f7940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 04:47:37.489449    9805 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-462000"
	W0408 04:47:37.489454    9805 addons.go:243] addon default-storageclass should already be in state true
	I0408 04:47:37.489467    9805 host.go:66] Checking if "stopped-upgrade-462000" exists ...
	I0408 04:47:37.490185    9805 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 04:47:37.490191    9805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 04:47:37.490197    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:47:37.529665    9805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 04:47:43.118540    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:43.118921    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:43.161860    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:47:43.161999    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:43.182742    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:47:43.182843    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:43.198343    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:47:43.198427    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:43.210720    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:47:43.210796    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:43.222511    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:47:43.222581    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:43.233472    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:47:43.233539    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:43.244814    9654 logs.go:276] 0 containers: []
	W0408 04:47:43.244825    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:43.244881    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:43.255212    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:47:43.255228    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:43.255234    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:43.259847    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:43.259854    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:43.294623    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:47:43.294635    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:47:43.308833    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:47:43.308846    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:47:43.321513    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:47:43.321526    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:47:43.333704    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:47:43.333714    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:47:43.345487    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:47:43.345496    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:47:43.367034    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:47:43.367048    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:43.378958    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:43.378970    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:47:43.399583    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:43.399678    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:43.415668    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:47:43.415676    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:47:43.430028    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:47:43.430040    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:47:43.443815    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:47:43.443829    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:47:43.460967    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:43.460978    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:43.485934    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:47:43.485946    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:47:43.498171    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:47:43.498182    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:47:43.510012    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:43.510022    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:47:43.510052    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:47:43.510060    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:43.510065    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:43.510069    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:43.510072    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:47:41.390708    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:41.390781    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:46.391180    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:46.391213    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:53.514156    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:51.391550    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:51.391572    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:58.516392    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:58.516605    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:58.541580    9654 logs.go:276] 1 containers: [c96920991837]
	I0408 04:47:58.541680    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:58.556243    9654 logs.go:276] 1 containers: [cd99db3352bb]
	I0408 04:47:58.556327    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:58.568274    9654 logs.go:276] 4 containers: [cd63449895f2 363d73659586 89d305451507 238b4c800085]
	I0408 04:47:58.568356    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:58.579426    9654 logs.go:276] 1 containers: [01bf4dd0af38]
	I0408 04:47:58.579524    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:58.590192    9654 logs.go:276] 1 containers: [3f2f8bb48711]
	I0408 04:47:58.590267    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:58.600976    9654 logs.go:276] 1 containers: [5e8637540688]
	I0408 04:47:58.601070    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:58.622955    9654 logs.go:276] 0 containers: []
	W0408 04:47:58.622967    9654 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:58.623029    9654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:58.645410    9654 logs.go:276] 1 containers: [483668667a12]
	I0408 04:47:58.645428    9654 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:58.645434    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:58.650214    9654 logs.go:123] Gathering logs for coredns [238b4c800085] ...
	I0408 04:47:58.650222    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 238b4c800085"
	I0408 04:47:58.662188    9654 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:58.662200    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:58.698984    9654 logs.go:123] Gathering logs for kube-apiserver [c96920991837] ...
	I0408 04:47:58.698996    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96920991837"
	I0408 04:47:58.713605    9654 logs.go:123] Gathering logs for coredns [cd63449895f2] ...
	I0408 04:47:58.713615    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd63449895f2"
	I0408 04:47:58.725741    9654 logs.go:123] Gathering logs for coredns [363d73659586] ...
	I0408 04:47:58.725754    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 363d73659586"
	I0408 04:47:58.741761    9654 logs.go:123] Gathering logs for coredns [89d305451507] ...
	I0408 04:47:58.741772    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89d305451507"
	I0408 04:47:58.753740    9654 logs.go:123] Gathering logs for kube-scheduler [01bf4dd0af38] ...
	I0408 04:47:58.753752    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01bf4dd0af38"
	I0408 04:47:58.768305    9654 logs.go:123] Gathering logs for kube-proxy [3f2f8bb48711] ...
	I0408 04:47:58.768316    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f2f8bb48711"
	I0408 04:47:58.779822    9654 logs.go:123] Gathering logs for storage-provisioner [483668667a12] ...
	I0408 04:47:58.779833    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 483668667a12"
	I0408 04:47:58.794901    9654 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:58.794914    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:58.817507    9654 logs.go:123] Gathering logs for container status ...
	I0408 04:47:58.817514    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:58.829423    9654 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:58.829435    9654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:47:58.848460    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:58.848551    9654 logs.go:138] Found kubelet problem: Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:58.864543    9654 logs.go:123] Gathering logs for etcd [cd99db3352bb] ...
	I0408 04:47:58.864550    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd99db3352bb"
	I0408 04:47:58.879039    9654 logs.go:123] Gathering logs for kube-controller-manager [5e8637540688] ...
	I0408 04:47:58.879049    9654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e8637540688"
	I0408 04:47:58.896328    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:58.896339    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:47:58.896365    9654 out.go:239] X Problems detected in kubelet:
	W0408 04:47:58.896370    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: W0408 11:40:14.377039    3983 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	W0408 04:47:58.896388    9654 out.go:239]   Apr 08 11:40:14 running-upgrade-835000 kubelet[3983]: E0408 11:40:14.377057    3983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-835000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-835000' and this object
	I0408 04:47:58.896395    9654 out.go:304] Setting ErrFile to fd 2...
	I0408 04:47:58.896444    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:47:56.391980    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:56.392028    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:01.392516    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:01.392556    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:06.393386    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:06.393411    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0408 04:48:07.594580    9805 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0408 04:48:07.597828    9805 out.go:177] * Enabled addons: storage-provisioner
	I0408 04:48:08.900435    9654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:07.604746    9805 addons.go:505] duration metric: took 31.335975292s for enable addons: enabled=[storage-provisioner]
	I0408 04:48:13.902693    9654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:13.908377    9654 out.go:177] 
	W0408 04:48:13.913261    9654 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0408 04:48:13.913271    9654 out.go:239] * 
	W0408 04:48:13.913942    9654 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:48:13.924208    9654 out.go:177] 
	I0408 04:48:11.394390    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:11.394426    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:16.395736    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:16.395770    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:21.397350    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:21.397371    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-04-08 11:39:07 UTC, ends at Mon 2024-04-08 11:48:29 UTC. --
	Apr 08 11:48:10 running-upgrade-835000 dockerd[3230]: time="2024-04-08T11:48:10.256757316Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f384bef8d662dabba9ad0b633cab6085daeda91ee0a21c3adcb30aea0d3d6615 pid=16043 runtime=io.containerd.runc.v2
	Apr 08 11:48:10 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:10Z" level=error msg="ContainerStats resp: {0x40005d00c0 linux}"
	Apr 08 11:48:10 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:10Z" level=error msg="ContainerStats resp: {0x400043f540 linux}"
	Apr 08 11:48:11 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:11Z" level=error msg="ContainerStats resp: {0x4000996c40 linux}"
	Apr 08 11:48:12 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:12Z" level=error msg="ContainerStats resp: {0x40008be940 linux}"
	Apr 08 11:48:12 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:12Z" level=error msg="ContainerStats resp: {0x40008bed00 linux}"
	Apr 08 11:48:12 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:12Z" level=error msg="ContainerStats resp: {0x4000997b80 linux}"
	Apr 08 11:48:12 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:12Z" level=error msg="ContainerStats resp: {0x4000997cc0 linux}"
	Apr 08 11:48:12 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:12Z" level=error msg="ContainerStats resp: {0x400084c200 linux}"
	Apr 08 11:48:12 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:12Z" level=error msg="ContainerStats resp: {0x40006b4580 linux}"
	Apr 08 11:48:12 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:12Z" level=error msg="ContainerStats resp: {0x40006b4a40 linux}"
	Apr 08 11:48:14 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 08 11:48:19 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:19Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 08 11:48:22 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:22Z" level=error msg="ContainerStats resp: {0x40004f5300 linux}"
	Apr 08 11:48:22 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:22Z" level=error msg="ContainerStats resp: {0x40004f5e40 linux}"
	Apr 08 11:48:23 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:23Z" level=error msg="ContainerStats resp: {0x40008bec00 linux}"
	Apr 08 11:48:24 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 08 11:48:24 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:24Z" level=error msg="ContainerStats resp: {0x40008bfcc0 linux}"
	Apr 08 11:48:24 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:24Z" level=error msg="ContainerStats resp: {0x400084d280 linux}"
	Apr 08 11:48:24 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:24Z" level=error msg="ContainerStats resp: {0x400084d500 linux}"
	Apr 08 11:48:24 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:24Z" level=error msg="ContainerStats resp: {0x400084dd80 linux}"
	Apr 08 11:48:24 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:24Z" level=error msg="ContainerStats resp: {0x400035a240 linux}"
	Apr 08 11:48:24 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:24Z" level=error msg="ContainerStats resp: {0x400035a800 linux}"
	Apr 08 11:48:24 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:24Z" level=error msg="ContainerStats resp: {0x40008cc800 linux}"
	Apr 08 11:48:29 running-upgrade-835000 cri-dockerd[3073]: time="2024-04-08T11:48:29Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	f384bef8d662d       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   70f172fc561ba
	dd28c769dea96       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   433f404a89b32
	cd63449895f2e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   433f404a89b32
	363d736595862       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   70f172fc561ba
	3f2f8bb48711c       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   1386dac495019
	483668667a129       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   99944bfd3b6a7
	01bf4dd0af388       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   413646860cece
	c96920991837b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   83d97f70b04a8
	5e86375406887       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   4b9a527ad6383
	cd99db3352bb5       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   f406ac82aa168
	
	
	==> coredns [363d73659586] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:44076->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:47306->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:36019->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:51182->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:53174->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:38798->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:41322->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:34720->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:50378->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2206202793766711438.5977852564046359289. HINFO: read udp 10.244.0.2:59847->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cd63449895f2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:37954->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:45756->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:47717->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:40095->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:37174->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:51827->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:52201->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:48049->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:40857->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3868389962419138853.8858605564362900518. HINFO: read udp 10.244.0.3:46945->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dd28c769dea9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9154355075702012061.6141322558308988286. HINFO: read udp 10.244.0.3:37444->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9154355075702012061.6141322558308988286. HINFO: read udp 10.244.0.3:44584->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9154355075702012061.6141322558308988286. HINFO: read udp 10.244.0.3:51754->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9154355075702012061.6141322558308988286. HINFO: read udp 10.244.0.3:48608->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9154355075702012061.6141322558308988286. HINFO: read udp 10.244.0.3:55378->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f384bef8d662] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2323587521454443734.3903494121745296878. HINFO: read udp 10.244.0.2:43649->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2323587521454443734.3903494121745296878. HINFO: read udp 10.244.0.2:35606->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2323587521454443734.3903494121745296878. HINFO: read udp 10.244.0.2:46758->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2323587521454443734.3903494121745296878. HINFO: read udp 10.244.0.2:40967->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2323587521454443734.3903494121745296878. HINFO: read udp 10.244.0.2:60198->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-835000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-835000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=running-upgrade-835000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T04_44_09_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:44:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-835000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:48:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:44:09 +0000   Mon, 08 Apr 2024 11:44:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:44:09 +0000   Mon, 08 Apr 2024 11:44:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:44:09 +0000   Mon, 08 Apr 2024 11:44:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:44:09 +0000   Mon, 08 Apr 2024 11:44:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-835000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 7752155c0c8e4a0fac1e1e1c7dd5ddff
	  System UUID:                7752155c0c8e4a0fac1e1e1c7dd5ddff
	  Boot ID:                    1853a78b-6a97-45b1-b1c2-d5bbbeda68e4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8t92f                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-jhzff                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-835000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-835000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-835000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-proxy-gtn5m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-835000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s (x3 over 4m27s)  kubelet          Node running-upgrade-835000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x3 over 4m27s)  kubelet          Node running-upgrade-835000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x2 over 4m27s)  kubelet          Node running-upgrade-835000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node running-upgrade-835000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node running-upgrade-835000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node running-upgrade-835000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m21s                  kubelet          Node running-upgrade-835000 status is now: NodeReady
	  Normal  RegisteredNode           4m9s                   node-controller  Node running-upgrade-835000 event: Registered Node running-upgrade-835000 in Controller
	
	
	==> dmesg <==
	[  +1.704070] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.083514] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.087759] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.143418] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.073252] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.078841] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.244749] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[  +9.689279] systemd-fstab-generator[1946]: Ignoring "noauto" for root device
	[  +2.608699] systemd-fstab-generator[2222]: Ignoring "noauto" for root device
	[  +0.148497] systemd-fstab-generator[2255]: Ignoring "noauto" for root device
	[  +0.101476] systemd-fstab-generator[2269]: Ignoring "noauto" for root device
	[  +0.088400] systemd-fstab-generator[2284]: Ignoring "noauto" for root device
	[ +12.565326] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.203859] systemd-fstab-generator[3028]: Ignoring "noauto" for root device
	[  +0.086402] systemd-fstab-generator[3041]: Ignoring "noauto" for root device
	[  +0.080683] systemd-fstab-generator[3052]: Ignoring "noauto" for root device
	[  +0.090314] systemd-fstab-generator[3066]: Ignoring "noauto" for root device
	[  +2.369186] systemd-fstab-generator[3217]: Ignoring "noauto" for root device
	[  +3.182952] systemd-fstab-generator[3613]: Ignoring "noauto" for root device
	[  +1.358030] systemd-fstab-generator[3977]: Ignoring "noauto" for root device
	[Apr 8 11:40] kauditd_printk_skb: 68 callbacks suppressed
	[Apr 8 11:44] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.382242] systemd-fstab-generator[10574]: Ignoring "noauto" for root device
	[  +5.648813] systemd-fstab-generator[11167]: Ignoring "noauto" for root device
	[  +0.471336] systemd-fstab-generator[11315]: Ignoring "noauto" for root device
	
	
	==> etcd [cd99db3352bb] <==
	{"level":"info","ts":"2024-04-08T11:44:04.484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-04-08T11:44:04.484Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-04-08T11:44:04.485Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T11:44:04.487Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-08T11:44:04.487Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T11:44:04.487Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-08T11:44:04.487Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-08T11:44:05.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-08T11:44:05.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-08T11:44:05.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-04-08T11:44:05.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:05.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:05.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:05.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:05.374Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-835000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T11:44:05.375Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T11:44:05.375Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T11:44:05.375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T11:44:05.375Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T11:44:05.375Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T11:44:05.375Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T11:44:05.376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T11:44:05.376Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T11:44:05.376Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T11:44:05.376Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 11:48:30 up 9 min,  0 users,  load average: 0.28, 0.18, 0.10
	Linux running-upgrade-835000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c96920991837] <==
	I0408 11:44:06.529335       1 controller.go:611] quota admission added evaluator for: namespaces
	I0408 11:44:06.573304       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0408 11:44:06.573916       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0408 11:44:06.592410       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0408 11:44:06.592527       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0408 11:44:06.592533       1 cache.go:39] Caches are synced for autoregister controller
	I0408 11:44:06.598588       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0408 11:44:07.322371       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0408 11:44:07.483803       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0408 11:44:07.488374       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0408 11:44:07.488401       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 11:44:07.653242       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 11:44:07.662789       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 11:44:07.733757       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0408 11:44:07.735818       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0408 11:44:07.736215       1 controller.go:611] quota admission added evaluator for: endpoints
	I0408 11:44:07.737557       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 11:44:08.617011       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0408 11:44:09.172261       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0408 11:44:09.180081       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0408 11:44:09.187202       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0408 11:44:09.227008       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 11:44:22.027247       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0408 11:44:22.222527       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0408 11:44:23.631841       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [5e8637540688] <==
	I0408 11:44:22.030415       1 range_allocator.go:173] Starting range CIDR allocator
	I0408 11:44:22.030445       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0408 11:44:22.030462       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0408 11:44:22.030590       1 shared_informer.go:262] Caches are synced for PVC protection
	I0408 11:44:22.032423       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0408 11:44:22.034897       1 shared_informer.go:262] Caches are synced for ephemeral
	I0408 11:44:22.035957       1 range_allocator.go:374] Set node running-upgrade-835000 PodCIDR to [10.244.0.0/24]
	I0408 11:44:22.038503       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0408 11:44:22.047386       1 shared_informer.go:262] Caches are synced for daemon sets
	I0408 11:44:22.073203       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-8t92f"
	I0408 11:44:22.075206       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jhzff"
	I0408 11:44:22.117111       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0408 11:44:22.123066       1 shared_informer.go:262] Caches are synced for persistent volume
	I0408 11:44:22.124901       1 shared_informer.go:262] Caches are synced for expand
	I0408 11:44:22.133518       1 shared_informer.go:262] Caches are synced for PV protection
	I0408 11:44:22.145275       1 shared_informer.go:262] Caches are synced for attach detach
	I0408 11:44:22.168077       1 shared_informer.go:262] Caches are synced for crt configmap
	I0408 11:44:22.227934       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gtn5m"
	I0408 11:44:22.241705       1 shared_informer.go:262] Caches are synced for namespace
	I0408 11:44:22.243853       1 shared_informer.go:262] Caches are synced for resource quota
	I0408 11:44:22.246981       1 shared_informer.go:262] Caches are synced for resource quota
	I0408 11:44:22.267808       1 shared_informer.go:262] Caches are synced for service account
	I0408 11:44:22.650573       1 shared_informer.go:262] Caches are synced for garbage collector
	I0408 11:44:22.650675       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0408 11:44:22.658958       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [3f2f8bb48711] <==
	I0408 11:44:23.620587       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0408 11:44:23.620610       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0408 11:44:23.620630       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0408 11:44:23.630216       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0408 11:44:23.630228       1 server_others.go:206] "Using iptables Proxier"
	I0408 11:44:23.630240       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0408 11:44:23.630326       1 server.go:661] "Version info" version="v1.24.1"
	I0408 11:44:23.630330       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:44:23.630574       1 config.go:317] "Starting service config controller"
	I0408 11:44:23.630586       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0408 11:44:23.630595       1 config.go:226] "Starting endpoint slice config controller"
	I0408 11:44:23.630597       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0408 11:44:23.630872       1 config.go:444] "Starting node config controller"
	I0408 11:44:23.630875       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0408 11:44:23.731424       1 shared_informer.go:262] Caches are synced for node config
	I0408 11:44:23.731440       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0408 11:44:23.731450       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [01bf4dd0af38] <==
	W0408 11:44:06.532462       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 11:44:06.532484       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:06.532511       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 11:44:06.532541       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 11:44:06.532568       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 11:44:06.532587       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:06.532624       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 11:44:06.532643       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 11:44:06.532667       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:44:06.532687       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 11:44:06.532966       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 11:44:06.533013       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 11:44:07.372952       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 11:44:07.373033       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 11:44:07.383933       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 11:44:07.383990       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:07.393063       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 11:44:07.393121       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 11:44:07.577125       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:44:07.577213       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 11:44:07.589924       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 11:44:07.589938       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 11:44:07.595711       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 11:44:07.595770       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0408 11:44:10.327934       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-04-08 11:39:07 UTC, ends at Mon 2024-04-08 11:48:30 UTC. --
	Apr 08 11:44:11 running-upgrade-835000 kubelet[11190]: E0408 11:44:11.202573   11190 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-835000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-835000"
	Apr 08 11:44:11 running-upgrade-835000 kubelet[11190]: I0408 11:44:11.401000   11190 request.go:601] Waited for 1.112602348s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Apr 08 11:44:11 running-upgrade-835000 kubelet[11190]: E0408 11:44:11.403882   11190 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-835000\" already exists" pod="kube-system/etcd-running-upgrade-835000"
	Apr 08 11:44:21 running-upgrade-835000 kubelet[11190]: I0408 11:44:21.975558   11190 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.080710   11190 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.082872   11190 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.118729   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8ea2e210-a668-4401-bbff-76e78555e6b0-tmp\") pod \"storage-provisioner\" (UID: \"8ea2e210-a668-4401-bbff-76e78555e6b0\") " pod="kube-system/storage-provisioner"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.118754   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7ls\" (UniqueName: \"kubernetes.io/projected/8ea2e210-a668-4401-bbff-76e78555e6b0-kube-api-access-8t7ls\") pod \"storage-provisioner\" (UID: \"8ea2e210-a668-4401-bbff-76e78555e6b0\") " pod="kube-system/storage-provisioner"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.118799   11190 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.119075   11190 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.219643   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb7580e-7f15-4ba3-92b8-617fecb09704-config-volume\") pod \"coredns-6d4b75cb6d-8t92f\" (UID: \"1eb7580e-7f15-4ba3-92b8-617fecb09704\") " pod="kube-system/coredns-6d4b75cb6d-8t92f"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.219690   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f44a4375-394f-426a-a2c4-c441f3bc580a-config-volume\") pod \"coredns-6d4b75cb6d-jhzff\" (UID: \"f44a4375-394f-426a-a2c4-c441f3bc580a\") " pod="kube-system/coredns-6d4b75cb6d-jhzff"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.219704   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9n7s\" (UniqueName: \"kubernetes.io/projected/1eb7580e-7f15-4ba3-92b8-617fecb09704-kube-api-access-p9n7s\") pod \"coredns-6d4b75cb6d-8t92f\" (UID: \"1eb7580e-7f15-4ba3-92b8-617fecb09704\") " pod="kube-system/coredns-6d4b75cb6d-8t92f"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.219716   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz65w\" (UniqueName: \"kubernetes.io/projected/f44a4375-394f-426a-a2c4-c441f3bc580a-kube-api-access-rz65w\") pod \"coredns-6d4b75cb6d-jhzff\" (UID: \"f44a4375-394f-426a-a2c4-c441f3bc580a\") " pod="kube-system/coredns-6d4b75cb6d-jhzff"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: E0408 11:44:22.224612   11190 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: E0408 11:44:22.224624   11190 projected.go:192] Error preparing data for projected volume kube-api-access-8t7ls for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: E0408 11:44:22.224661   11190 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/8ea2e210-a668-4401-bbff-76e78555e6b0-kube-api-access-8t7ls podName:8ea2e210-a668-4401-bbff-76e78555e6b0 nodeName:}" failed. No retries permitted until 2024-04-08 11:44:22.724648884 +0000 UTC m=+13.565696601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8t7ls" (UniqueName: "kubernetes.io/projected/8ea2e210-a668-4401-bbff-76e78555e6b0-kube-api-access-8t7ls") pod "storage-provisioner" (UID: "8ea2e210-a668-4401-bbff-76e78555e6b0") : configmap "kube-root-ca.crt" not found
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.227872   11190 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.421252   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13eaf750-6d22-41ab-83f1-7559c4aea314-xtables-lock\") pod \"kube-proxy-gtn5m\" (UID: \"13eaf750-6d22-41ab-83f1-7559c4aea314\") " pod="kube-system/kube-proxy-gtn5m"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.421305   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13eaf750-6d22-41ab-83f1-7559c4aea314-kube-proxy\") pod \"kube-proxy-gtn5m\" (UID: \"13eaf750-6d22-41ab-83f1-7559c4aea314\") " pod="kube-system/kube-proxy-gtn5m"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.421318   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13eaf750-6d22-41ab-83f1-7559c4aea314-lib-modules\") pod \"kube-proxy-gtn5m\" (UID: \"13eaf750-6d22-41ab-83f1-7559c4aea314\") " pod="kube-system/kube-proxy-gtn5m"
	Apr 08 11:44:22 running-upgrade-835000 kubelet[11190]: I0408 11:44:22.421338   11190 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29297\" (UniqueName: \"kubernetes.io/projected/13eaf750-6d22-41ab-83f1-7559c4aea314-kube-api-access-29297\") pod \"kube-proxy-gtn5m\" (UID: \"13eaf750-6d22-41ab-83f1-7559c4aea314\") " pod="kube-system/kube-proxy-gtn5m"
	Apr 08 11:44:23 running-upgrade-835000 kubelet[11190]: I0408 11:44:23.455390   11190 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="433f404a89b320cb35a42e6c58a32c5e29dbe39719a1df9a07bb50d5a278ce35"
	Apr 08 11:48:10 running-upgrade-835000 kubelet[11190]: I0408 11:48:10.601644   11190 scope.go:110] "RemoveContainer" containerID="89d305451507209d90b4e2cbd546387c2897b76ccffbebd38eb17d6b61d28b92"
	Apr 08 11:48:10 running-upgrade-835000 kubelet[11190]: I0408 11:48:10.618113   11190 scope.go:110] "RemoveContainer" containerID="238b4c800085bdc1f138868c915a2ac3acf1ee990610a2fe0715c9db9f6fa574"
	
	
	==> storage-provisioner [483668667a12] <==
	I0408 11:44:23.148810       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 11:44:23.155036       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 11:44:23.155070       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 11:44:23.159506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 11:44:23.159667       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"daf01068-65cf-4fe8-9b8f-c654dd91830e", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-835000_4d3ea995-7b92-4fc4-82c0-bc9b7f5d9326 became leader
	I0408 11:44:23.164089       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-835000_4d3ea995-7b92-4fc4-82c0-bc9b7f5d9326!
	I0408 11:44:23.264524       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-835000_4d3ea995-7b92-4fc4-82c0-bc9b7f5d9326!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-835000 -n running-upgrade-835000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-835000 -n running-upgrade-835000: exit status 2 (15.609749708s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-835000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-835000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-835000
--- FAIL: TestRunningBinaryUpgrade (602.47s)
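Note: the GUEST_START exit above is the apiserver /healthz endpoint never reporting healthy within the 6m0s node wait; the two interleaved minikube processes (pids 9654 and 9805) each poll the endpoint on a roughly 5s cycle and every request hits its client-side deadline. A minimal diagnostic sketch of the same probe, assuming shell access to the host and taking the guest address 10.0.2.15:8443 straight from the logs (curl's -k skips TLS verification, which the real wait loop handles via the cluster CA instead; --max-time 5 mirrors the per-request deadline visible above):

	# Probe the guest apiserver health endpoint the way the wait loop does.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "healthz unreachable"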

TestKubernetesUpgrade (18.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-305000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-305000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.837958291s)

-- stdout --
	* [kubernetes-upgrade-305000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-305000" primary control-plane node in "kubernetes-upgrade-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:41:43.959864    9732 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:41:43.960000    9732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:41:43.960003    9732 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:43.960006    9732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:41:43.960134    9732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:41:43.961195    9732 out.go:298] Setting JSON to false
	I0408 04:41:43.977822    9732 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6072,"bootTime":1712570431,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:41:43.977894    9732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:41:43.984318    9732 out.go:177] * [kubernetes-upgrade-305000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:41:43.992400    9732 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:41:43.997334    9732 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:41:43.992434    9732 notify.go:220] Checking for updates...
	I0408 04:41:44.003309    9732 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:41:44.006333    9732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:41:44.009262    9732 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:41:44.012339    9732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:41:44.015648    9732 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:41:44.015708    9732 config.go:182] Loaded profile config "running-upgrade-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:41:44.015772    9732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:41:44.020311    9732 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:41:44.027261    9732 start.go:297] selected driver: qemu2
	I0408 04:41:44.027268    9732 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:41:44.027274    9732 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:41:44.029556    9732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:41:44.032327    9732 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:41:44.035389    9732 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 04:41:44.035424    9732 cni.go:84] Creating CNI manager for ""
	I0408 04:41:44.035430    9732 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 04:41:44.035450    9732 start.go:340] cluster config:
	{Name:kubernetes-upgrade-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:41:44.039646    9732 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:41:44.047301    9732 out.go:177] * Starting "kubernetes-upgrade-305000" primary control-plane node in "kubernetes-upgrade-305000" cluster
	I0408 04:41:44.051131    9732 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:41:44.051148    9732 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 04:41:44.051158    9732 cache.go:56] Caching tarball of preloaded images
	I0408 04:41:44.051215    9732 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:41:44.051221    9732 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 04:41:44.051285    9732 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/kubernetes-upgrade-305000/config.json ...
	I0408 04:41:44.051298    9732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/kubernetes-upgrade-305000/config.json: {Name:mk924f783e1f79db8282ddfab576d9cbf574e733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:41:44.051513    9732 start.go:360] acquireMachinesLock for kubernetes-upgrade-305000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:41:44.051544    9732 start.go:364] duration metric: took 23.584µs to acquireMachinesLock for "kubernetes-upgrade-305000"
	I0408 04:41:44.051554    9732 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:41:44.051581    9732 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:41:44.059266    9732 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:41:44.084593    9732 start.go:159] libmachine.API.Create for "kubernetes-upgrade-305000" (driver="qemu2")
	I0408 04:41:44.084639    9732 client.go:168] LocalClient.Create starting
	I0408 04:41:44.084739    9732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:41:44.084774    9732 main.go:141] libmachine: Decoding PEM data...
	I0408 04:41:44.084787    9732 main.go:141] libmachine: Parsing certificate...
	I0408 04:41:44.084825    9732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:41:44.084850    9732 main.go:141] libmachine: Decoding PEM data...
	I0408 04:41:44.084858    9732 main.go:141] libmachine: Parsing certificate...
	I0408 04:41:44.085202    9732 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:41:44.250471    9732 main.go:141] libmachine: Creating SSH key...
	I0408 04:41:44.322752    9732 main.go:141] libmachine: Creating Disk image...
	I0408 04:41:44.322759    9732 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:41:44.322937    9732 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2
	I0408 04:41:44.338647    9732 main.go:141] libmachine: STDOUT: 
	I0408 04:41:44.338675    9732 main.go:141] libmachine: STDERR: 
	I0408 04:41:44.338745    9732 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2 +20000M
	I0408 04:41:44.350014    9732 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:41:44.350034    9732 main.go:141] libmachine: STDERR: 
	I0408 04:41:44.350061    9732 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2
	I0408 04:41:44.350066    9732 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:41:44.350096    9732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:55:d1:66:af:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2
	I0408 04:41:44.352003    9732 main.go:141] libmachine: STDOUT: 
	I0408 04:41:44.352020    9732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:41:44.352041    9732 client.go:171] duration metric: took 267.388166ms to LocalClient.Create
	I0408 04:41:46.354219    9732 start.go:128] duration metric: took 2.302634167s to createHost
	I0408 04:41:46.354286    9732 start.go:83] releasing machines lock for "kubernetes-upgrade-305000", held for 2.302767125s
	W0408 04:41:46.354382    9732 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:41:46.360552    9732 out.go:177] * Deleting "kubernetes-upgrade-305000" in qemu2 ...
	W0408 04:41:46.395598    9732 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:41:46.395631    9732 start.go:728] Will try again in 5 seconds ...
	I0408 04:41:51.396136    9732 start.go:360] acquireMachinesLock for kubernetes-upgrade-305000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:41:51.396297    9732 start.go:364] duration metric: took 128.209µs to acquireMachinesLock for "kubernetes-upgrade-305000"
	I0408 04:41:51.396342    9732 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:41:51.396405    9732 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:41:51.404628    9732 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:41:51.428258    9732 start.go:159] libmachine.API.Create for "kubernetes-upgrade-305000" (driver="qemu2")
	I0408 04:41:51.428292    9732 client.go:168] LocalClient.Create starting
	I0408 04:41:51.428359    9732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:41:51.428399    9732 main.go:141] libmachine: Decoding PEM data...
	I0408 04:41:51.428408    9732 main.go:141] libmachine: Parsing certificate...
	I0408 04:41:51.428443    9732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:41:51.428470    9732 main.go:141] libmachine: Decoding PEM data...
	I0408 04:41:51.428478    9732 main.go:141] libmachine: Parsing certificate...
	I0408 04:41:51.428814    9732 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:41:51.577036    9732 main.go:141] libmachine: Creating SSH key...
	I0408 04:41:51.698633    9732 main.go:141] libmachine: Creating Disk image...
	I0408 04:41:51.698647    9732 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:41:51.698838    9732 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2
	I0408 04:41:51.711637    9732 main.go:141] libmachine: STDOUT: 
	I0408 04:41:51.711665    9732 main.go:141] libmachine: STDERR: 
	I0408 04:41:51.711730    9732 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2 +20000M
	I0408 04:41:51.722897    9732 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:41:51.722919    9732 main.go:141] libmachine: STDERR: 
	I0408 04:41:51.722933    9732 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2
	I0408 04:41:51.722961    9732 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:41:51.722995    9732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:24:8f:85:df:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2
	I0408 04:41:51.724808    9732 main.go:141] libmachine: STDOUT: 
	I0408 04:41:51.724826    9732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:41:51.724836    9732 client.go:171] duration metric: took 296.544ms to LocalClient.Create
	I0408 04:41:53.726998    9732 start.go:128] duration metric: took 2.330600417s to createHost
	I0408 04:41:53.727057    9732 start.go:83] releasing machines lock for "kubernetes-upgrade-305000", held for 2.330780833s
	W0408 04:41:53.727488    9732 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:41:53.740093    9732 out.go:177] 
	W0408 04:41:53.744225    9732 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:41:53.744251    9732 out.go:239] * 
	* 
	W0408 04:41:53.747411    9732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:41:53.754074    9732 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-305000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-305000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-305000: (3.638111417s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-305000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-305000 status --format={{.Host}}: exit status 7 (67.076542ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-305000 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-305000 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.185081792s)

-- stdout --
	* [kubernetes-upgrade-305000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-305000" primary control-plane node in "kubernetes-upgrade-305000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:41:57.507614    9768 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:41:57.507735    9768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:41:57.507739    9768 out.go:304] Setting ErrFile to fd 2...
	I0408 04:41:57.507746    9768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:41:57.507893    9768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:41:57.508914    9768 out.go:298] Setting JSON to false
	I0408 04:41:57.526675    9768 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6086,"bootTime":1712570431,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:41:57.526744    9768 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:41:57.530817    9768 out.go:177] * [kubernetes-upgrade-305000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:41:57.538772    9768 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:41:57.538823    9768 notify.go:220] Checking for updates...
	I0408 04:41:57.545682    9768 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:41:57.548741    9768 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:41:57.551684    9768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:41:57.554699    9768 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:41:57.557641    9768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:41:57.560974    9768 config.go:182] Loaded profile config "kubernetes-upgrade-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0408 04:41:57.561249    9768 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:41:57.564547    9768 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:41:57.571655    9768 start.go:297] selected driver: qemu2
	I0408 04:41:57.571662    9768 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:41:57.571707    9768 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:41:57.574370    9768 cni.go:84] Creating CNI manager for ""
	I0408 04:41:57.574387    9768 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:41:57.574405    9768 start.go:340] cluster config:
	{Name:kubernetes-upgrade-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:kubernetes-upgrade-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:41:57.578650    9768 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:41:57.585690    9768 out.go:177] * Starting "kubernetes-upgrade-305000" primary control-plane node in "kubernetes-upgrade-305000" cluster
	I0408 04:41:57.589657    9768 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime docker
	I0408 04:41:57.589671    9768 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0408 04:41:57.589680    9768 cache.go:56] Caching tarball of preloaded images
	I0408 04:41:57.589729    9768 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:41:57.589734    9768 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on docker
	I0408 04:41:57.589778    9768 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/kubernetes-upgrade-305000/config.json ...
	I0408 04:41:57.590333    9768 start.go:360] acquireMachinesLock for kubernetes-upgrade-305000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:41:57.590360    9768 start.go:364] duration metric: took 20.208µs to acquireMachinesLock for "kubernetes-upgrade-305000"
	I0408 04:41:57.590368    9768 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:41:57.590372    9768 fix.go:54] fixHost starting: 
	I0408 04:41:57.590487    9768 fix.go:112] recreateIfNeeded on kubernetes-upgrade-305000: state=Stopped err=<nil>
	W0408 04:41:57.590497    9768 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:41:57.597735    9768 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-305000" ...
	I0408 04:41:57.601696    9768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:24:8f:85:df:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2
	I0408 04:41:57.603818    9768 main.go:141] libmachine: STDOUT: 
	I0408 04:41:57.603840    9768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:41:57.603869    9768 fix.go:56] duration metric: took 13.49575ms for fixHost
	I0408 04:41:57.603874    9768 start.go:83] releasing machines lock for "kubernetes-upgrade-305000", held for 13.510583ms
	W0408 04:41:57.603880    9768 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:41:57.603912    9768 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:41:57.603917    9768 start.go:728] Will try again in 5 seconds ...
	I0408 04:42:02.606062    9768 start.go:360] acquireMachinesLock for kubernetes-upgrade-305000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:42:02.606536    9768 start.go:364] duration metric: took 358.459µs to acquireMachinesLock for "kubernetes-upgrade-305000"
	I0408 04:42:02.606692    9768 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:42:02.606714    9768 fix.go:54] fixHost starting: 
	I0408 04:42:02.607501    9768 fix.go:112] recreateIfNeeded on kubernetes-upgrade-305000: state=Stopped err=<nil>
	W0408 04:42:02.607527    9768 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:42:02.612988    9768 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-305000" ...
	I0408 04:42:02.619190    9768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:24:8f:85:df:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubernetes-upgrade-305000/disk.qcow2
	I0408 04:42:02.629221    9768 main.go:141] libmachine: STDOUT: 
	I0408 04:42:02.629301    9768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:42:02.629406    9768 fix.go:56] duration metric: took 22.692375ms for fixHost
	I0408 04:42:02.629426    9768 start.go:83] releasing machines lock for "kubernetes-upgrade-305000", held for 22.867584ms
	W0408 04:42:02.629651    9768 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:42:02.635939    9768 out.go:177] 
	W0408 04:42:02.639932    9768 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:42:02.639982    9768 out.go:239] * 
	* 
	W0408 04:42:02.642649    9768 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:42:02.646952    9768 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-305000 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-305000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-305000 version --output=json: exit status 1 (59.173666ms)

** stderr ** 
	error: context "kubernetes-upgrade-305000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-04-08 04:42:02.721488 -0700 PDT m=+949.350711501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-305000 -n kubernetes-upgrade-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-305000 -n kubernetes-upgrade-305000: exit status 7 (34.040084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-305000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-305000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-305000
--- FAIL: TestKubernetesUpgrade (18.93s)
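Note: the test keeps going after the first start fails (the stop at version_upgrade_test.go:227 and the second start at :243 both run against a machine that was never created), so a single dead daemon yields three stacked failures plus the missing kubectl context above. A sketch of skipping early instead, assuming the hypothetical CheckSocketVMnet helper from the TestRunningBinaryUpgrade note (plus the usual testing and time imports):

	// requireSocketVMnet skips a qemu2-only test when the host
	// networking daemon is down, instead of failing every step.
	func requireSocketVMnet(t *testing.T) {
		t.Helper()
		if err := preflight.CheckSocketVMnet(2 * time.Second); err != nil {
			t.Skipf("qemu2 networking unavailable: %v", err)
		}
	}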

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.19s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18588
- KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3416735468/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.19s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.47s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18588
- KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3314209505/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.47s)
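Note: both TestHyperkitDriverSkipUpgrade subtests fail with DRV_UNSUPPORTED_OS (exit status 56) because the hyperkit hypervisor only exists for Intel Macs and this agent is darwin/arm64. A hedged sketch of an architecture guard (hypothetical, not the suite's actual check) that would skip rather than fail on Apple-silicon runners:

	package guards

	import (
		"runtime"
		"testing"
	)

	// skipUnlessHyperkitCapable skips hyperkit-specific tests on any
	// platform other than darwin/amd64.
	func skipUnlessHyperkitCapable(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit requires darwin/amd64, got %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}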

TestStoppedBinaryUpgrade/Upgrade (576.74s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.904334000 start -p stopped-upgrade-462000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.904334000 start -p stopped-upgrade-462000 --memory=2200 --vm-driver=qemu2 : (39.874591167s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.904334000 -p stopped-upgrade-462000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.904334000 -p stopped-upgrade-462000 stop: (12.123765417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m44.648663s)

-- stdout --
	* [stopped-upgrade-462000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-462000" primary control-plane node in "stopped-upgrade-462000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-462000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0408 04:42:56.319716    9805 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:42:56.319869    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:42:56.319873    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:42:56.319876    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:42:56.320027    9805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:42:56.321175    9805 out.go:298] Setting JSON to false
	I0408 04:42:56.340427    9805 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6145,"bootTime":1712570431,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:42:56.340512    9805 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:42:56.344894    9805 out.go:177] * [stopped-upgrade-462000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:42:56.352854    9805 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:42:56.355899    9805 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:42:56.352898    9805 notify.go:220] Checking for updates...
	I0408 04:42:56.361806    9805 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:42:56.364849    9805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:42:56.367876    9805 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:42:56.370908    9805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:42:56.374115    9805 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:42:56.377859    9805 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 04:42:56.380774    9805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:42:56.384821    9805 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:42:56.391861    9805 start.go:297] selected driver: qemu2
	I0408 04:42:56.391868    9805 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51448 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:42:56.391938    9805 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:42:56.394721    9805 cni.go:84] Creating CNI manager for ""
	I0408 04:42:56.394739    9805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:42:56.394767    9805 start.go:340] cluster config:
	{Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51448 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:42:56.394819    9805 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:42:56.401794    9805 out.go:177] * Starting "stopped-upgrade-462000" primary control-plane node in "stopped-upgrade-462000" cluster
	I0408 04:42:56.405777    9805 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 04:42:56.405807    9805 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0408 04:42:56.405819    9805 cache.go:56] Caching tarball of preloaded images
	I0408 04:42:56.405901    9805 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:42:56.405908    9805 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0408 04:42:56.405971    9805 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/config.json ...
	I0408 04:42:56.406552    9805 start.go:360] acquireMachinesLock for stopped-upgrade-462000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:42:56.406589    9805 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "stopped-upgrade-462000"
	I0408 04:42:56.406598    9805 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:42:56.406604    9805 fix.go:54] fixHost starting: 
	I0408 04:42:56.406720    9805 fix.go:112] recreateIfNeeded on stopped-upgrade-462000: state=Stopped err=<nil>
	W0408 04:42:56.406729    9805 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:42:56.414873    9805 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-462000" ...
	I0408 04:42:56.418050    9805 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51414-:22,hostfwd=tcp::51415-:2376,hostname=stopped-upgrade-462000 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/disk.qcow2
	I0408 04:42:56.463095    9805 main.go:141] libmachine: STDOUT: 
	I0408 04:42:56.463129    9805 main.go:141] libmachine: STDERR: 
	I0408 04:42:56.463136    9805 main.go:141] libmachine: Waiting for VM to start (ssh -p 51414 docker@127.0.0.1)...
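
	For reference, the wait above is a poll against the forwarded SSH port (hostfwd tcp::51414-:22 in the QEMU command line). A minimal Go sketch of such a readiness loop, using the port from the log; the retry interval and 60s budget are illustrative assumptions, not minikube's actual values:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP address until it accepts connections or the budget
// runs out. minikube's real wait also performs an SSH handshake; dialing the
// forwarded port is the simplified first step sketched here.
func waitForSSH(addr string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond) // retry interval: assumption
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, budget)
}

func main() {
	// 51414 is the host port QEMU forwards to guest port 22 in the log above.
	if err := waitForSSH("127.0.0.1:51414", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
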
	I0408 04:43:16.648955    9805 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/config.json ...
	I0408 04:43:16.649676    9805 machine.go:94] provisionDockerMachine start ...
	I0408 04:43:16.649793    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:16.650172    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:16.650187    9805 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 04:43:16.738907    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 04:43:16.738944    9805 buildroot.go:166] provisioning hostname "stopped-upgrade-462000"
	I0408 04:43:16.739050    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:16.739252    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:16.739261    9805 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-462000 && echo "stopped-upgrade-462000" | sudo tee /etc/hostname
	I0408 04:43:16.822121    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-462000
	
	I0408 04:43:16.822190    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:16.822335    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:16.822346    9805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-462000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-462000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-462000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 04:43:16.901017    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 04:43:16.901032    9805 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18588-7343/.minikube CaCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18588-7343/.minikube}
	I0408 04:43:16.901042    9805 buildroot.go:174] setting up certificates
	I0408 04:43:16.901048    9805 provision.go:84] configureAuth start
	I0408 04:43:16.901053    9805 provision.go:143] copyHostCerts
	I0408 04:43:16.901125    9805 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem, removing ...
	I0408 04:43:16.901134    9805 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem
	I0408 04:43:16.901292    9805 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/cert.pem (1123 bytes)
	I0408 04:43:16.901537    9805 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem, removing ...
	I0408 04:43:16.901544    9805 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem
	I0408 04:43:16.902215    9805 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/key.pem (1679 bytes)
	I0408 04:43:16.902392    9805 exec_runner.go:144] found /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem, removing ...
	I0408 04:43:16.902398    9805 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem
	I0408 04:43:16.902468    9805 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.pem (1078 bytes)
	I0408 04:43:16.902580    9805 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-462000 san=[127.0.0.1 localhost minikube stopped-upgrade-462000]
	I0408 04:43:16.968000    9805 provision.go:177] copyRemoteCerts
	I0408 04:43:16.968030    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 04:43:16.968041    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:43:17.005543    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 04:43:17.012653    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 04:43:17.019468    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 04:43:17.025916    9805 provision.go:87] duration metric: took 124.859875ms to configureAuth
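
	The configureAuth step timed above signs a Docker server certificate with the minikube CA, embedding the SANs listed in the log (127.0.0.1, localhost, minikube, stopped-upgrade-462000). A minimal Go sketch of that signing step with crypto/x509; file paths, the PKCS#1 key format, and the validity period are assumptions, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (placeholder paths for the ca.pem/ca-key.pem in the log).
	caPEM, _ := os.ReadFile("ca.pem")
	keyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

	// Server key and a certificate template carrying the SANs from the log line.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-462000"}},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-462000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
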
	I0408 04:43:17.025926    9805 buildroot.go:189] setting minikube options for container-runtime
	I0408 04:43:17.026025    9805 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:43:17.026066    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:17.026190    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:17.026197    9805 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 04:43:17.099186    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 04:43:17.099195    9805 buildroot.go:70] root file system type: tmpfs
	I0408 04:43:17.099247    9805 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 04:43:17.099294    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:17.099404    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:17.099437    9805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 04:43:17.171706    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 04:43:17.171764    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:17.171875    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:17.171883    9805 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 04:43:17.561062    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0408 04:43:17.561076    9805 machine.go:97] duration metric: took 911.401041ms to provisionDockerMachine
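
	The shell command a few lines up installs the new unit only when it differs from the current one (diff -u ... || { mv; daemon-reload; enable; restart }), which is why the "No such file" diff output above still leads to an install. A local Go sketch of the same compare-then-replace idiom; paths and unit name are taken from the log, the rest is illustrative:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// installIfChanged replaces dst with src and reloads systemd only when the
// contents differ, mirroring the "diff || { mv; daemon-reload; restart }"
// shell idiom in the log above.
func installIfChanged(src, dst string) error {
	oldData, _ := os.ReadFile(dst) // a missing file reads as empty, forcing an install
	newData, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return nil // nothing changed; leave the running service untouched
	}
	if err := os.Rename(src, dst); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service")
}
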
	I0408 04:43:17.561083    9805 start.go:293] postStartSetup for "stopped-upgrade-462000" (driver="qemu2")
	I0408 04:43:17.561090    9805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 04:43:17.561146    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 04:43:17.561156    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:43:17.598415    9805 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 04:43:17.599687    9805 info.go:137] Remote host: Buildroot 2021.02.12
	I0408 04:43:17.599696    9805 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18588-7343/.minikube/addons for local assets ...
	I0408 04:43:17.599768    9805 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18588-7343/.minikube/files for local assets ...
	I0408 04:43:17.599853    9805 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem -> 77492.pem in /etc/ssl/certs
	I0408 04:43:17.599941    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 04:43:17.602289    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem --> /etc/ssl/certs/77492.pem (1708 bytes)
	I0408 04:43:17.609401    9805 start.go:296] duration metric: took 48.313334ms for postStartSetup
	I0408 04:43:17.609414    9805 fix.go:56] duration metric: took 21.203109584s for fixHost
	I0408 04:43:17.609445    9805 main.go:141] libmachine: Using SSH client type: native
	I0408 04:43:17.609547    9805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102701c80] 0x1027044e0 <nil>  [] 0s} localhost 51414 <nil> <nil>}
	I0408 04:43:17.609551    9805 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 04:43:17.679764    9805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576598.176383754
	
	I0408 04:43:17.679776    9805 fix.go:216] guest clock: 1712576598.176383754
	I0408 04:43:17.679781    9805 fix.go:229] Guest: 2024-04-08 04:43:18.176383754 -0700 PDT Remote: 2024-04-08 04:43:17.609415 -0700 PDT m=+21.323518418 (delta=566.968754ms)
	I0408 04:43:17.679795    9805 fix.go:200] guest clock delta is within tolerance: 566.968754ms
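
	fix.go compares the guest's `date +%s.%N` output against host time and accepts the drift when it is under a tolerance. A minimal Go sketch of that delta check; the 1s tolerance is an assumption (the log only shows that a 566ms delta passed), and float parsing loses some sub-microsecond precision:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output (seconds.nanoseconds)
// and returns the signed difference from host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Guest timestamp taken from the log line above.
	d, err := clockDelta("1712576598.176383754", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumption; minikube's actual bound may differ
	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n",
		d, math.Abs(float64(d)) < float64(tolerance))
}
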
	I0408 04:43:17.679802    9805 start.go:83] releasing machines lock for "stopped-upgrade-462000", held for 21.273506416s
	I0408 04:43:17.679880    9805 ssh_runner.go:195] Run: cat /version.json
	I0408 04:43:17.679890    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:43:17.679898    9805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 04:43:17.679925    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	W0408 04:43:17.680562    9805 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51526->127.0.0.1:51414: write: broken pipe
	I0408 04:43:17.680584    9805 retry.go:31] will retry after 303.712287ms: ssh: handshake failed: write tcp 127.0.0.1:51526->127.0.0.1:51414: write: broken pipe
	W0408 04:43:18.039702    9805 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0408 04:43:18.039906    9805 ssh_runner.go:195] Run: systemctl --version
	I0408 04:43:18.043748    9805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 04:43:18.047293    9805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 04:43:18.047354    9805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0408 04:43:18.053220    9805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0408 04:43:18.062030    9805 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 04:43:18.062050    9805 start.go:494] detecting cgroup driver to use...
	I0408 04:43:18.062189    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 04:43:18.072825    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0408 04:43:18.077050    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 04:43:18.081264    9805 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 04:43:18.081297    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 04:43:18.085220    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 04:43:18.089002    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 04:43:18.092320    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 04:43:18.095277    9805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 04:43:18.098085    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 04:43:18.101083    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 04:43:18.103819    9805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 04:43:18.106503    9805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 04:43:18.109435    9805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 04:43:18.112433    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:18.193970    9805 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 04:43:18.200589    9805 start.go:494] detecting cgroup driver to use...
	I0408 04:43:18.200646    9805 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 04:43:18.207224    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 04:43:18.211708    9805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 04:43:18.217801    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 04:43:18.223136    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 04:43:18.227677    9805 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 04:43:18.269267    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 04:43:18.274150    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 04:43:18.279468    9805 ssh_runner.go:195] Run: which cri-dockerd
	I0408 04:43:18.280613    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 04:43:18.283076    9805 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
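
	Having stopped containerd and crio, start.go repoints crictl at the cri-dockerd socket by rewriting /etc/crictl.yaml (the printf | sudo tee command above). A one-step Go sketch of that write; the real flow runs through sudo tee over SSH rather than a direct file write:

package main

import "os"

func main() {
	// Mirrors: printf "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n" | sudo tee /etc/crictl.yaml
	conf := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0644); err != nil {
		panic(err)
	}
}
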
	I0408 04:43:18.287902    9805 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 04:43:18.356984    9805 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 04:43:18.423769    9805 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 04:43:18.423836    9805 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 04:43:18.428922    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:18.505126    9805 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 04:43:19.646178    9805 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.141051584s)
	I0408 04:43:19.646243    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 04:43:19.651360    9805 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0408 04:43:19.657795    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 04:43:19.662621    9805 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 04:43:19.743836    9805 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 04:43:19.819257    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:19.896306    9805 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 04:43:19.901601    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 04:43:19.906536    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:19.989012    9805 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 04:43:20.027300    9805 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 04:43:20.027393    9805 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 04:43:20.029652    9805 start.go:562] Will wait 60s for crictl version
	I0408 04:43:20.029701    9805 ssh_runner.go:195] Run: which crictl
	I0408 04:43:20.031047    9805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 04:43:20.046486    9805 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0408 04:43:20.046555    9805 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 04:43:20.064351    9805 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 04:43:20.084419    9805 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0408 04:43:20.084531    9805 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0408 04:43:20.085847    9805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
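
	The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` pipeline above rewrites /etc/hosts so that exactly one host.minikube.internal entry remains. A Go sketch of the same filter-and-append rewrite, staged through a temp file like the shell version; the temp-file name is an assumption:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the host name and appends a
// fresh "ip\tname" entry, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp" // the shell version uses /tmp/h.$$
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal")
}
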
	I0408 04:43:20.089863    9805 kubeadm.go:877] updating cluster {Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51448 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0408 04:43:20.089908    9805 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 04:43:20.089947    9805 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 04:43:20.100663    9805 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 04:43:20.100673    9805 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 04:43:20.100726    9805 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 04:43:20.104014    9805 ssh_runner.go:195] Run: which lz4
	I0408 04:43:20.105310    9805 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 04:43:20.106482    9805 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 04:43:20.106492    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0408 04:43:20.784999    9805 docker.go:649] duration metric: took 679.727084ms to copy over tarball
	I0408 04:43:20.785071    9805 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 04:43:21.956355    9805 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.171285208s)
	I0408 04:43:21.956368    9805 ssh_runner.go:146] rm: /preloaded.tar.lz4
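
	After the scp, the preload tarball is unpacked with tar using lz4 as the external decompressor, then deleted. A Go sketch that shells out the same way the logged command does (requires GNU tar and lz4 on the guest):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability", // preserve file capabilities
		"-I", "lz4", // decompress with lz4 before untarring
		"-C", "/var", // extract under /var, where the docker image store lives
		"-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}
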
	I0408 04:43:21.971920    9805 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 04:43:21.974808    9805 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0408 04:43:21.979819    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:22.061506    9805 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 04:43:23.707525    9805 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.646024292s)
	I0408 04:43:23.707614    9805 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 04:43:23.723814    9805 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 04:43:23.723826    9805 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 04:43:23.723832    9805 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 04:43:23.729864    9805 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:23.729889    9805 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0408 04:43:23.729976    9805 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:23.730334    9805 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:23.730482    9805 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:23.730516    9805 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:23.730632    9805 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:23.731033    9805 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:23.740829    9805 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 04:43:23.740881    9805 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:23.740901    9805 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:23.740974    9805 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:23.741023    9805 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:23.741043    9805 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:23.741554    9805 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:23.741552    9805 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:24.160107    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0408 04:43:24.171153    9805 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0408 04:43:24.171176    9805 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0408 04:43:24.171231    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0408 04:43:24.181285    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0408 04:43:24.181393    9805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0408 04:43:24.183101    9805 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0408 04:43:24.183113    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0408 04:43:24.191743    9805 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0408 04:43:24.191752    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0408 04:43:24.191790    9805 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0408 04:43:24.191900    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:24.209354    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:24.216905    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:24.228045    9805 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0408 04:43:24.228075    9805 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0408 04:43:24.228086    9805 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0408 04:43:24.228093    9805 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:24.228097    9805 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:24.228152    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0408 04:43:24.228152    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 04:43:24.236827    9805 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0408 04:43:24.236846    9805 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:24.236901    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0408 04:43:24.246627    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:24.265773    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0408 04:43:24.265908    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 04:43:24.266005    9805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0408 04:43:24.267018    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0408 04:43:24.267062    9805 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0408 04:43:24.267075    9805 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:24.267111    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0408 04:43:24.268071    9805 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0408 04:43:24.268082    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0408 04:43:24.280515    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:24.286280    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0408 04:43:24.295699    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:24.309670    9805 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0408 04:43:24.309683    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0408 04:43:24.312145    9805 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0408 04:43:24.312164    9805 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:24.312223    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0408 04:43:24.325262    9805 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0408 04:43:24.325285    9805 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:24.325343    9805 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 04:43:24.368821    9805 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0408 04:43:24.368845    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0408 04:43:24.368866    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0408 04:43:24.368946    9805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0408 04:43:24.370393    9805 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0408 04:43:24.370404    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0408 04:43:24.529041    9805 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0408 04:43:24.529055    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0408 04:43:24.592706    9805 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0408 04:43:24.592821    9805 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:24.672355    9805 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0408 04:43:24.672379    9805 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0408 04:43:24.672400    9805 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:24.672469    9805 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:43:24.686489    9805 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 04:43:24.686590    9805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0408 04:43:24.688067    9805 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0408 04:43:24.688078    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0408 04:43:24.712292    9805 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 04:43:24.712308    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0408 04:43:24.944648    9805 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 04:43:24.944687    9805 cache_images.go:92] duration metric: took 1.220864791s to LoadCachedImages
	W0408 04:43:24.944730    9805 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
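
	Each image above goes through the same decision: inspect its ID in the runtime, and when it does not match the expected hash, remove the stale tag, copy the cached tarball over, and load it. A condensed Go sketch of that per-image pipeline; values are taken from the pause:3.7 entry, the scp hop is represented by a local tarball path, and the real code normalizes the "sha256:" prefix:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// loadCachedImage mirrors the per-image flow in the log: if the runtime's
// image ID differs from the expected hash, drop the stale image and stream
// the cached tarball into `docker load`.
func loadCachedImage(name, wantID, tarball string) error {
	out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", name).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // already present with the right content
	}
	_ = exec.Command("docker", "rmi", name).Run() // ignore "no such image" errors
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f
	return load.Run()
}

func main() {
	err := loadCachedImage("registry.k8s.io/pause:3.7",
		"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		"/var/lib/minikube/images/pause_3.7")
	fmt.Println(err)
}
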
	I0408 04:43:24.944735    9805 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0408 04:43:24.944803    9805 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-462000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 04:43:24.944867    9805 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0408 04:43:24.958115    9805 cni.go:84] Creating CNI manager for ""
	I0408 04:43:24.958127    9805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:43:24.958132    9805 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 04:43:24.958140    9805 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-462000 NodeName:stopped-upgrade-462000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 04:43:24.958204    9805 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-462000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
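	The kubeadm config above is rendered from the options struct printed at kubeadm.go:181. A toy Go sketch of that struct-to-YAML templating with text/template; this fragment covers only part of the InitConfiguration document, and the field set is a simplification of minikube's real template:

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

// A fragment of the InitConfiguration section; the real template renders all
// three YAML documents shown in the log above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress: "10.0.2.15",
		APIServerPort:    8443,
		NodeName:         "stopped-upgrade-462000",
		CRISocket:        "/var/run/cri-dockerd.sock",
	})
}
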
	I0408 04:43:24.958256    9805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0408 04:43:24.961601    9805 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 04:43:24.961631    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 04:43:24.964754    9805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0408 04:43:24.969946    9805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 04:43:24.974956    9805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0408 04:43:24.980200    9805 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0408 04:43:24.981443    9805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 04:43:24.985096    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:43:25.048630    9805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 04:43:25.054677    9805 certs.go:68] Setting up /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000 for IP: 10.0.2.15
	I0408 04:43:25.054685    9805 certs.go:194] generating shared ca certs ...
	I0408 04:43:25.054694    9805 certs.go:226] acquiring lock for ca certs: {Name:mkf571f644c202bb973f8b5774e57a066bda7c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:43:25.054849    9805 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.key
	I0408 04:43:25.054896    9805 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.key
	I0408 04:43:25.054901    9805 certs.go:256] generating profile certs ...
	I0408 04:43:25.054973    9805 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.key
	I0408 04:43:25.054992    9805 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3
	I0408 04:43:25.055002    9805 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0408 04:43:25.195336    9805 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 ...
	I0408 04:43:25.195352    9805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3: {Name:mkeaa4f5964f1e35c4e71960ef905304f13cde2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:43:25.195669    9805 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3 ...
	I0408 04:43:25.195674    9805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3: {Name:mk86a190f057fbd339413ab3ccc5a7ca36f4036e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:43:25.195825    9805 certs.go:381] copying /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt
	I0408 04:43:25.195960    9805 certs.go:385] copying /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key
	I0408 04:43:25.196110    9805 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/proxy-client.key
	I0408 04:43:25.196249    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749.pem (1338 bytes)
	W0408 04:43:25.196280    9805 certs.go:480] ignoring /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749_empty.pem, impossibly tiny 0 bytes
	I0408 04:43:25.196285    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 04:43:25.196310    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem (1078 bytes)
	I0408 04:43:25.196336    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem (1123 bytes)
	I0408 04:43:25.196362    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/key.pem (1679 bytes)
	I0408 04:43:25.196412    9805 certs.go:484] found cert: /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem (1708 bytes)
	I0408 04:43:25.196752    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 04:43:25.204060    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0408 04:43:25.210938    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 04:43:25.217756    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 04:43:25.225141    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 04:43:25.232512    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 04:43:25.239113    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 04:43:25.245844    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 04:43:25.253028    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/ssl/certs/77492.pem --> /usr/share/ca-certificates/77492.pem (1708 bytes)
	I0408 04:43:25.259891    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 04:43:25.266395    9805 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/7749.pem --> /usr/share/ca-certificates/7749.pem (1338 bytes)
	I0408 04:43:25.273459    9805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 04:43:25.278624    9805 ssh_runner.go:195] Run: openssl version
	I0408 04:43:25.280449    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 04:43:25.283361    9805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:43:25.284850    9805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:43:25.284882    9805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 04:43:25.286723    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 04:43:25.289975    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7749.pem && ln -fs /usr/share/ca-certificates/7749.pem /etc/ssl/certs/7749.pem"
	I0408 04:43:25.293440    9805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7749.pem
	I0408 04:43:25.294986    9805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:27 /usr/share/ca-certificates/7749.pem
	I0408 04:43:25.295005    9805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7749.pem
	I0408 04:43:25.296798    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7749.pem /etc/ssl/certs/51391683.0"
	I0408 04:43:25.299575    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77492.pem && ln -fs /usr/share/ca-certificates/77492.pem /etc/ssl/certs/77492.pem"
	I0408 04:43:25.302424    9805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77492.pem
	I0408 04:43:25.303833    9805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:27 /usr/share/ca-certificates/77492.pem
	I0408 04:43:25.303855    9805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77492.pem
	I0408 04:43:25.305517    9805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77492.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 04:43:25.308822    9805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 04:43:25.310409    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 04:43:25.312749    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 04:43:25.314650    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 04:43:25.316646    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 04:43:25.318496    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 04:43:25.320341    9805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 04:43:25.322181    9805 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51448 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 04:43:25.322258    9805 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 04:43:25.332976    9805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 04:43:25.336086    9805 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 04:43:25.336093    9805 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 04:43:25.336095    9805 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 04:43:25.336123    9805 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 04:43:25.338813    9805 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 04:43:25.339106    9805 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-462000" does not appear in /Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:43:25.339212    9805 kubeconfig.go:62] /Users/jenkins/minikube-integration/18588-7343/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-462000" cluster setting kubeconfig missing "stopped-upgrade-462000" context setting]
	I0408 04:43:25.339402    9805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/kubeconfig: {Name:mk04d6060f19666b377da34a3aa7f8b9bcbb5054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:43:25.339855    9805 kapi.go:59] client config for stopped-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1039f7940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 04:43:25.340179    9805 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 04:43:25.342879    9805 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-462000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0408 04:43:25.342896    9805 kubeadm.go:1154] stopping kube-system containers ...
	I0408 04:43:25.342938    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 04:43:25.353623    9805 docker.go:483] Stopping containers: [00e1dd75f73b 39adc787a95e 3c09b8b966ff f69b3e2174f4 4275b5aac9cf 7acaa22acfc7 479bf6d02b41 c7faa8c96454]
	I0408 04:43:25.353685    9805 ssh_runner.go:195] Run: docker stop 00e1dd75f73b 39adc787a95e 3c09b8b966ff f69b3e2174f4 4275b5aac9cf 7acaa22acfc7 479bf6d02b41 c7faa8c96454
	I0408 04:43:25.364461    9805 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 04:43:25.369701    9805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 04:43:25.372777    9805 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 04:43:25.372783    9805 kubeadm.go:156] found existing configuration files:
	
	I0408 04:43:25.372803    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/admin.conf
	I0408 04:43:25.375309    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 04:43:25.375333    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 04:43:25.377920    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/kubelet.conf
	I0408 04:43:25.380808    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 04:43:25.380828    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 04:43:25.383450    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/controller-manager.conf
	I0408 04:43:25.385913    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 04:43:25.385935    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 04:43:25.389020    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/scheduler.conf
	I0408 04:43:25.391497    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 04:43:25.391513    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 04:43:25.394038    9805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 04:43:25.397056    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:25.419064    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:26.093953    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:26.227548    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:26.246954    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 04:43:26.267443    9805 api_server.go:52] waiting for apiserver process to appear ...
	I0408 04:43:26.267520    9805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:43:26.769586    9805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:43:27.268812    9805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:43:27.275222    9805 api_server.go:72] duration metric: took 1.007793459s to wait for apiserver process to appear ...
	I0408 04:43:27.275234    9805 api_server.go:88] waiting for apiserver healthz status ...
	I0408 04:43:27.275243    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:32.277268    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:32.277332    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:37.277495    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:37.277544    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:42.278144    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:42.278183    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:47.278653    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:47.278693    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:52.279436    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:52.279573    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:43:57.280813    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:43:57.280854    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:02.282106    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:02.282135    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:07.283693    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:07.283769    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:12.284148    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:12.284184    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:17.286325    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:17.286393    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:22.288590    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:22.288611    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:27.290748    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:27.290935    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:27.302828    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:27.302908    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:27.313644    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:27.313714    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:27.323982    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:27.324049    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:27.339210    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:27.339287    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:27.349447    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:27.349585    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:27.360127    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:27.360209    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:27.370271    9805 logs.go:276] 0 containers: []
	W0408 04:44:27.370284    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:27.370355    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:27.380896    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:27.380913    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:27.380918    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:27.403591    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:27.403602    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:27.417499    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:27.417511    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:27.429184    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:27.429196    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:27.441020    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:27.441035    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:27.458548    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:27.458559    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:27.484941    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:27.484954    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:27.498058    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:27.498069    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:27.536554    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:27.536565    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:27.652392    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:27.652405    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:27.667009    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:27.667022    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:27.678760    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:27.678777    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:27.694005    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:27.694022    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:27.709664    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:27.709675    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:27.721101    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:27.721112    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:27.725740    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:27.725748    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:27.753052    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:27.753064    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:30.271744    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:35.273870    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:35.274040    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:35.294944    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:35.295038    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:35.306130    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:35.306202    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:35.316512    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:35.316582    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:35.326970    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:35.327035    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:35.340425    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:35.340505    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:35.351246    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:35.351332    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:35.362645    9805 logs.go:276] 0 containers: []
	W0408 04:44:35.362656    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:35.362721    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:35.373966    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:35.374002    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:35.374009    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:35.385327    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:35.385339    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:35.405118    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:35.405129    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:35.419210    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:35.419221    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:35.457937    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:35.457946    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:35.472695    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:35.472706    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:35.493659    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:35.493672    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:35.521575    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:35.521583    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:35.547149    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:35.547161    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:35.560967    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:35.560981    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:35.575821    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:35.575833    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:35.590948    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:35.590959    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:35.604997    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:35.605009    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:35.618972    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:35.618988    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:35.630799    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:35.630817    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:35.642499    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:35.642510    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:35.646940    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:35.646949    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:38.186620    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:43.188862    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:43.189074    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:43.205432    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:43.205538    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:43.218483    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:43.218580    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:43.229910    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:43.229985    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:43.240787    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:43.240864    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:43.251331    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:43.251402    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:43.262078    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:43.262150    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:43.272360    9805 logs.go:276] 0 containers: []
	W0408 04:44:43.272370    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:43.272440    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:43.282833    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:43.282863    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:43.282869    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:43.286934    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:43.286943    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:43.300493    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:43.300505    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:43.314735    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:43.314765    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:43.326807    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:43.326817    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:43.343894    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:43.343904    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:43.354831    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:43.354840    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:43.366072    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:43.366081    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:43.377544    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:43.377555    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:43.392533    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:43.392544    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:43.420521    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:43.420534    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:43.433039    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:43.433053    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:43.471857    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:43.471869    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:43.508905    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:43.508919    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:43.533684    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:43.533699    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:43.548031    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:43.548042    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:43.558978    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:43.558988    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:46.080599    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:51.082723    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:51.082885    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:51.094564    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:51.094644    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:51.105761    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:51.105830    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:51.119160    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:51.119231    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:51.129401    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:51.129487    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:51.139857    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:51.139930    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:51.159090    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:51.159160    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:51.172470    9805 logs.go:276] 0 containers: []
	W0408 04:44:51.172484    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:51.172547    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:51.182878    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:51.182897    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:51.182902    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:51.219548    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:51.219560    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:51.223580    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:51.223595    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:51.247999    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:51.248010    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:51.265928    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:51.265941    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:51.277372    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:51.277388    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:51.315827    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:51.315842    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:51.329723    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:51.329733    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:51.343390    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:51.343403    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:44:51.357399    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:51.357410    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:51.369275    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:51.369286    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:51.383744    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:51.383759    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:51.400925    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:51.400937    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:51.425026    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:51.425034    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:51.440554    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:51.440564    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:51.451397    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:51.451407    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:51.462264    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:51.462274    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:53.976173    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:44:58.978715    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:44:58.979035    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:44:59.004879    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:44:59.004990    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:44:59.023550    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:44:59.023640    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:44:59.042917    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:44:59.042997    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:44:59.054594    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:44:59.054686    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:44:59.065093    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:44:59.065159    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:44:59.075620    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:44:59.075696    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:44:59.092268    9805 logs.go:276] 0 containers: []
	W0408 04:44:59.092279    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:44:59.092341    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:44:59.103016    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:44:59.103034    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:44:59.103040    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:44:59.114640    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:44:59.114664    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:44:59.151431    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:44:59.151443    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:44:59.166993    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:44:59.167007    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:44:59.179288    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:44:59.179302    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:44:59.190640    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:44:59.190652    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:44:59.213947    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:44:59.213959    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:44:59.251729    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:44:59.251743    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:44:59.267958    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:44:59.267970    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:44:59.282421    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:44:59.282435    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:44:59.296486    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:44:59.296501    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:44:59.309256    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:44:59.309267    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:44:59.313846    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:44:59.313853    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:44:59.340367    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:44:59.340382    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:44:59.357995    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:44:59.358010    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:44:59.370048    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:44:59.370058    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:44:59.384425    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:44:59.384436    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:01.901566    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:06.904099    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:06.904331    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:06.923957    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:06.924057    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:06.937628    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:06.937708    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:06.950087    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:06.950160    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:06.961199    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:06.961283    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:06.971864    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:06.971930    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:06.982166    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:06.982233    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:06.992348    9805 logs.go:276] 0 containers: []
	W0408 04:45:06.992358    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:06.992414    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:07.002664    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:07.002681    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:07.002688    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:07.040078    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:07.040091    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:07.054710    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:07.054723    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:07.068151    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:07.068164    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:07.079813    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:07.079825    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:07.084212    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:07.084220    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:07.109439    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:07.109450    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:07.132011    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:07.132021    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:07.148595    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:07.148605    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:07.160929    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:07.160940    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:07.174446    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:07.174457    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:07.186122    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:07.186136    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:07.220971    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:07.220982    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:07.232776    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:07.232788    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:07.245497    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:07.245508    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:07.261089    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:07.261098    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:07.272631    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:07.272642    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:09.798852    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:14.801006    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:14.801216    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:14.818900    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:14.819008    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:14.832862    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:14.832936    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:14.844753    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:14.844818    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:14.855412    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:14.855482    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:14.866255    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:14.866327    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:14.877305    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:14.877374    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:14.888066    9805 logs.go:276] 0 containers: []
	W0408 04:45:14.888077    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:14.888137    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:14.899757    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:14.899786    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:14.899792    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:14.910875    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:14.910888    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:14.925443    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:14.925457    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:14.936880    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:14.936894    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:14.948838    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:14.948848    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:14.953570    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:14.953579    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:14.970606    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:14.970617    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:14.981621    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:14.981633    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:15.005456    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:15.005468    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:15.041976    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:15.041989    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:15.056010    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:15.056019    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:15.083396    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:15.083407    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:15.107352    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:15.107364    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:15.123529    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:15.123540    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:15.136060    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:15.136071    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:15.173237    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:15.173247    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:15.186781    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:15.186792    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:17.702722    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:22.704954    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
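The five-second gap between each "Checking apiserver healthz" line and its "stopped" line is the probe's HTTP client timeout; the loop then falls through to a diagnostics pass before probing again. A minimal Go sketch of that polling pattern, assuming the endpoint, the 5 s per-request timeout, and the overall wait budget from the timestamps in this log (an illustration only, not minikube's actual api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5 s "Checking" -> "stopped" gap
			Transport: &http.Transport{
				// assumption: the guest apiserver's cert is not trusted by the probe
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // e.g. context deadline exceeded
				continue                     // diagnostics would run here before the next probe
			}
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("apiserver never became healthy")
	}

In this run the probe never succeeds, which is why the same diagnostics cycle repeats below until the test gives up.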
	I0408 04:45:22.705114    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:22.720031    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:22.720109    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:22.731175    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:22.731249    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:22.743785    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:22.743852    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:22.757998    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:22.758070    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:22.768307    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:22.768385    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:22.778660    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:22.778726    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:22.789152    9805 logs.go:276] 0 containers: []
	W0408 04:45:22.789163    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:22.789217    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:22.802903    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
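Each enumeration pass above runs one docker ps query per control-plane component and records the matching container IDs (two per component here, because the pods have been restarted). A sketch of that discovery step in Go, using exactly the filter and format strings shown in the log (the helper name containerIDs is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers (running or exited)
	// whose name matches k8s_<component>.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}

An empty result is not fatal: as the warnings in the log show, "kindnet" simply yields 0 containers on this driver and is skipped.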
	I0408 04:45:22.802919    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:22.802928    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:22.814552    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:22.814564    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:22.827018    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:22.827030    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:22.841210    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:22.841221    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:22.852536    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:22.852548    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:22.867647    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:22.867660    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:22.886038    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:22.886048    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:22.897902    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:22.897912    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:22.914863    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:22.914877    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:22.939175    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:22.939183    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:22.978220    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:22.978228    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:23.015393    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:23.015405    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:23.028336    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:23.028346    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:23.040045    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:23.040056    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:23.044731    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:23.044739    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:23.059183    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:23.059193    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:23.084907    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:23.084918    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
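The "Gathering logs for ..." pass that follows each failed probe pulls a bounded tail from every discovered container, plus the kubelet and docker units from journalctl, a filtered dmesg, and kubectl describe nodes via the pinned binary under /var/lib/minikube/binaries. Note also the container-status fallback: `which crictl || echo crictl` tries crictl first and falls back to `docker ps -a` when it is absent. A self-contained Go sketch of that fan-out, with the container ID and command strings taken from the log (the gather helper is illustrative, and the commands run locally here rather than over SSH as in the report):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one bounded log dump through /bin/bash -c, as the
	// ssh_runner lines above do on the guest.
	func gather(label, cmd string) {
		fmt.Printf("Gathering logs for %s ...\n", label)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// one container dump, capped at 400 lines to keep the report bounded
		gather("kube-apiserver [efe4f3fadf4a]", "docker logs --tail 400 efe4f3fadf4a")
		// host-level sources, same cap
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}

The 400-line cap on every source keeps each retry's diagnostics to a few seconds, which is why whole cycles fit in the ~2.5 s gaps between probes below.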
	I0408 04:45:25.604705    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:30.606823    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:30.606991    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:30.630570    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:30.630648    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:30.649892    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:30.649968    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:30.660821    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:30.660893    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:30.671839    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:30.671916    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:30.682466    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:30.682541    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:30.693142    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:30.693218    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:30.703488    9805 logs.go:276] 0 containers: []
	W0408 04:45:30.703499    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:30.703560    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:30.714208    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:30.714226    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:30.714232    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:30.731636    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:30.731648    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:30.745013    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:30.745025    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:30.758769    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:30.758779    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:30.772234    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:30.772245    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:30.784136    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:30.784146    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:30.795004    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:30.795015    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:30.807290    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:30.807302    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:30.830702    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:30.830720    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:30.855215    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:30.855230    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:30.868217    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:30.868229    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:30.882842    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:30.882857    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:30.898258    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:30.898268    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:30.910165    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:30.910175    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:30.922060    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:30.922072    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:30.960719    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:30.960729    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:30.965221    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:30.965227    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:33.501206    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:38.503399    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:38.503618    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:38.530452    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:38.530581    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:38.549166    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:38.549248    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:38.562624    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:38.562707    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:38.574920    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:38.574992    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:38.585719    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:38.585790    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:38.596869    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:38.596937    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:38.610629    9805 logs.go:276] 0 containers: []
	W0408 04:45:38.610639    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:38.610691    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:38.624382    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:38.624397    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:38.624402    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:38.663699    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:38.663708    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:38.698522    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:38.698537    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:38.712784    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:38.712796    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:38.724319    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:38.724330    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:38.749025    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:38.749036    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:38.760882    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:38.760896    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:38.774454    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:38.774466    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:38.794565    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:38.794578    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:38.798736    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:38.798744    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:38.812771    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:38.812781    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:38.837658    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:38.837668    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:38.855567    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:38.855580    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:38.875938    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:38.875950    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:38.888369    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:38.888380    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:38.903390    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:38.903399    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:38.926775    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:38.926786    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:41.440198    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:46.440718    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:46.440872    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:46.452605    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:46.452686    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:46.466175    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:46.466251    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:46.478186    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:46.478257    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:46.493490    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:46.493564    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:46.503851    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:46.503926    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:46.514453    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:46.514526    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:46.524995    9805 logs.go:276] 0 containers: []
	W0408 04:45:46.525005    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:46.525061    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:46.536459    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:46.536508    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:46.536515    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:46.550258    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:46.550271    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:46.561184    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:46.561194    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:46.572883    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:46.572893    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:46.596204    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:46.596217    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:46.609690    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:46.609699    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:46.644266    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:46.644278    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:46.656151    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:46.656161    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:46.667957    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:46.667967    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:46.692299    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:46.692307    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:46.696356    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:46.696365    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:46.710426    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:46.710435    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:46.734665    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:46.734678    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:46.748907    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:46.748919    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:46.766044    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:46.766056    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:46.778106    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:46.778118    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:46.814927    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:46.814938    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:49.328278    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:45:54.330390    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:45:54.330514    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:45:54.345626    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:45:54.345718    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:45:54.357292    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:45:54.357365    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:45:54.369043    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:45:54.369115    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:45:54.379249    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:45:54.379315    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:45:54.389498    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:45:54.389559    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:45:54.399868    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:45:54.399935    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:45:54.412582    9805 logs.go:276] 0 containers: []
	W0408 04:45:54.412596    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:45:54.412650    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:45:54.422798    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:45:54.422816    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:45:54.422821    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:45:54.436684    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:45:54.436695    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:45:54.461679    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:45:54.461688    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:45:54.473424    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:45:54.473437    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:45:54.487106    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:45:54.487116    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:45:54.498866    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:45:54.498877    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:45:54.513131    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:45:54.513142    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:45:54.525331    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:45:54.525343    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:45:54.542546    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:45:54.542557    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:45:54.554234    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:45:54.554245    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:45:54.593219    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:45:54.593231    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:45:54.597678    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:45:54.597686    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:45:54.611089    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:45:54.611099    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:45:54.625411    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:45:54.625425    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:45:54.648003    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:45:54.648011    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:45:54.660197    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:45:54.660211    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:45:54.696468    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:45:54.696482    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:45:57.210678    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:02.210891    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:02.211075    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:02.225917    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:02.226006    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:02.238187    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:02.238266    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:02.248847    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:02.248936    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:02.259637    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:02.259717    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:02.270048    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:02.270123    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:02.280607    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:02.280675    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:02.290704    9805 logs.go:276] 0 containers: []
	W0408 04:46:02.290714    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:02.290771    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:02.301486    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:02.301504    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:02.301511    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:02.343963    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:02.343974    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:02.365920    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:02.365931    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:02.379896    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:02.379909    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:02.391650    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:02.391667    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:02.405272    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:02.405285    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:02.416705    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:02.416716    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:02.428574    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:02.428589    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:02.465511    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:02.465519    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:02.477171    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:02.477183    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:02.500607    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:02.500615    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:02.525042    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:02.525053    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:02.536703    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:02.536715    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:02.551462    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:02.551472    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:02.568592    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:02.568603    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:02.572795    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:02.572803    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:02.584053    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:02.584064    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:05.106088    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:10.108396    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:10.108517    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:10.121142    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:10.121211    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:10.131458    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:10.131516    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:10.142291    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:10.142364    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:10.152749    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:10.152821    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:10.162854    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:10.162911    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:10.173208    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:10.173276    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:10.187271    9805 logs.go:276] 0 containers: []
	W0408 04:46:10.187283    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:10.187340    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:10.198998    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:10.199016    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:10.199022    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:10.213307    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:10.213318    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:10.224385    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:10.224396    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:10.236508    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:10.236519    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:10.250919    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:10.250934    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:10.261875    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:10.261887    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:10.266202    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:10.266208    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:10.301635    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:10.301650    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:10.330361    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:10.330372    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:10.349039    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:10.349049    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:10.362301    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:10.362312    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:10.384744    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:10.384755    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:10.408391    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:10.408400    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:10.420449    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:10.420460    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:10.460519    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:10.460527    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:10.476838    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:10.476848    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:10.489137    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:10.489148    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:13.003050    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:18.005426    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:18.005863    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:18.047629    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:18.047755    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:18.069090    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:18.069191    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:18.084119    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:18.084194    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:18.103660    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:18.103730    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:18.114294    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:18.114359    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:18.124861    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:18.124928    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:18.135448    9805 logs.go:276] 0 containers: []
	W0408 04:46:18.135462    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:18.135518    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:18.145860    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:18.145881    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:18.145887    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:18.159600    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:18.159612    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:18.182435    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:18.182443    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:18.197471    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:18.197486    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:18.211741    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:18.211753    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:18.229365    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:18.229375    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:18.254291    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:18.254301    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:18.269212    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:18.269227    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:18.281189    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:18.281204    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:18.317857    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:18.317867    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:18.322385    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:18.322391    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:18.336347    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:18.336358    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:18.347787    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:18.347796    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:18.385165    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:18.385174    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:18.396957    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:18.396968    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:18.410473    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:18.410482    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:18.421439    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:18.421451    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:20.938193    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:25.940405    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:25.940603    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:25.958529    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:25.958617    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:25.971692    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:25.971773    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:25.983148    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:25.983221    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:25.993754    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:25.993833    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:26.003926    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:26.003997    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:26.017204    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:26.017277    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:26.027289    9805 logs.go:276] 0 containers: []
	W0408 04:46:26.027300    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:26.027357    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:26.038134    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:26.038151    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:26.038156    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:26.049845    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:26.049858    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:26.062022    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:26.062034    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:26.076944    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:26.076957    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:26.089163    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:26.089173    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:26.100594    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:26.100605    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:26.139192    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:26.139202    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:26.174146    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:26.174160    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:26.196355    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:26.196366    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:26.219913    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:26.219925    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:26.233639    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:26.233649    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:26.253376    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:26.253388    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:26.270485    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:26.270497    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:26.284488    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:26.284499    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:26.289042    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:26.289051    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:26.317055    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:26.317072    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:26.330880    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:26.330893    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:28.845436    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:33.846445    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:33.846918    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:33.885975    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:33.886116    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:33.908081    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:33.908188    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:33.923572    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:33.923651    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:33.936157    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:33.936226    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:33.948599    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:33.948671    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:33.959162    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:33.959223    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:33.969636    9805 logs.go:276] 0 containers: []
	W0408 04:46:33.969650    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:33.969707    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:33.987583    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:33.987603    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:33.987609    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:34.011429    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:34.011438    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:34.023712    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:34.023724    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:34.038428    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:34.038442    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:34.050639    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:34.050652    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:34.061603    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:34.061614    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:34.065522    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:34.065528    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:34.080139    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:34.080153    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:34.098095    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:34.098115    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:34.110987    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:34.111000    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:34.123980    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:34.123996    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:34.163929    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:34.163946    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:34.189545    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:34.189556    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:34.206714    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:34.206725    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:34.220014    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:34.220024    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:34.256824    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:34.256835    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:34.270465    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:34.270475    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:36.784485    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:41.786593    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:41.786666    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:41.798300    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:41.798376    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:41.810237    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:41.810309    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:41.820877    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:41.820954    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:41.832304    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:41.832379    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:41.851404    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:41.851481    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:41.862975    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:41.863048    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:41.874162    9805 logs.go:276] 0 containers: []
	W0408 04:46:41.874173    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:41.874233    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:41.885604    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:41.885623    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:41.885629    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:41.898180    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:41.898194    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:41.912332    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:41.912347    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:41.938468    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:41.938481    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:41.950945    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:41.950957    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:41.967601    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:41.967615    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:41.981378    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:41.981393    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:41.996064    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:41.996075    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:42.034635    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:42.034645    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:42.072603    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:42.072615    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:42.086849    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:42.086860    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:42.105405    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:42.105419    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:42.118080    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:42.118091    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:42.122434    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:42.122441    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:42.134534    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:42.134547    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:42.159023    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:42.159036    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:42.173357    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:42.173372    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
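
Each retry gathers diagnostics the same way: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per component to discover container IDs, then `docker logs --tail 400 <id>` for every hit. A hypothetical helper reproducing that pattern (component list copied from the log; assumes a local docker CLI is on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		// Discover container IDs for this component, including exited ones (-a).
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(name, "discovery failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each container's logs, as in the log above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", name, id, logs)
		}
	}
}
```
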
	I0408 04:46:44.687043    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:49.689179    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:49.689361    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:49.700136    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:49.700216    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:49.714499    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:49.714574    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:49.724936    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:49.725006    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:49.736219    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:49.736292    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:49.747141    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:49.747215    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:49.757308    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:49.757374    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:49.767544    9805 logs.go:276] 0 containers: []
	W0408 04:46:49.767554    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:49.767612    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:49.778606    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:49.778624    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:49.778630    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:49.817371    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:49.817384    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:49.821573    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:49.821581    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:49.835423    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:49.835436    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:49.860144    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:49.860157    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:49.871806    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:49.871817    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:49.884708    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:49.884720    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:49.921183    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:49.921196    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:49.935665    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:49.935676    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:49.948213    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:49.948227    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:49.960642    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:49.960651    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:49.972147    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:49.972158    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:49.984361    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:49.984374    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:49.999509    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:49.999519    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:46:50.016142    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:50.016153    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:50.043496    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:50.043509    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:50.060835    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:50.060846    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:52.584741    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:46:57.586859    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:46:57.586998    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:46:57.598405    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:46:57.598492    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:46:57.609271    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:46:57.609351    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:46:57.619653    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:46:57.619726    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:46:57.633643    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:46:57.633712    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:46:57.644397    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:46:57.644470    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:46:57.654769    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:46:57.654836    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:46:57.665667    9805 logs.go:276] 0 containers: []
	W0408 04:46:57.665678    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:46:57.665744    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:46:57.676287    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:46:57.676308    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:46:57.676316    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:46:57.688502    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:46:57.688513    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:46:57.700300    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:46:57.700314    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:46:57.739533    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:46:57.739545    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:46:57.744415    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:46:57.744421    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:46:57.756093    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:46:57.756103    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:46:57.771037    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:46:57.771052    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:46:57.788479    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:46:57.788489    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:46:57.802173    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:46:57.802187    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:46:57.815044    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:46:57.815054    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:46:57.829399    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:46:57.829411    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:46:57.840467    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:46:57.840478    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:46:57.865662    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:46:57.865675    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:46:57.879509    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:46:57.879520    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:46:57.901695    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:46:57.901702    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:46:57.937679    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:46:57.937692    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:46:57.951707    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:46:57.951717    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:47:00.468839    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:05.471128    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:05.471475    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:05.501655    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:47:05.501788    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:05.519082    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:47:05.519169    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:05.532815    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:47:05.532884    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:05.544710    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:47:05.544789    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:05.555754    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:47:05.555824    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:05.566676    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:47:05.566749    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:05.577560    9805 logs.go:276] 0 containers: []
	W0408 04:47:05.577571    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:05.577633    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:05.588515    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:47:05.588532    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:47:05.588537    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:47:05.600519    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:47:05.600532    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:47:05.611867    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:05.611879    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:05.615976    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:47:05.615986    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:47:05.632283    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:47:05.632297    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:47:05.649693    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:47:05.649704    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:47:05.667377    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:05.667387    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:05.690862    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:47:05.690874    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:05.703462    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:05.703476    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:05.738980    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:47:05.738996    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:47:05.750765    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:05.750777    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:47:05.790351    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:47:05.790359    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:47:05.804506    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:47:05.804519    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:47:05.817972    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:47:05.817982    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:47:05.829233    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:47:05.829243    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:47:05.844619    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:47:05.844630    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:47:05.865494    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:47:05.865504    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:47:08.392520    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:13.394966    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:13.395169    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:13.411422    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:47:13.411510    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:13.423752    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:47:13.423841    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:13.437688    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:47:13.437760    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:13.448154    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:47:13.448222    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:13.458518    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:47:13.458594    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:13.470293    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:47:13.470366    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:13.480361    9805 logs.go:276] 0 containers: []
	W0408 04:47:13.480374    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:13.480436    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:13.491156    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:47:13.491174    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:47:13.491179    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:47:13.504827    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:47:13.504839    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:47:13.518944    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:13.518957    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:13.523730    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:47:13.523739    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:47:13.538348    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:47:13.538358    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:47:13.549542    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:47:13.549552    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:47:13.566205    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:47:13.566215    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:47:13.579764    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:47:13.579775    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:47:13.594383    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:47:13.594395    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:13.608062    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:13.608076    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:13.643592    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:47:13.643604    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:47:13.668423    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:47:13.668432    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:47:13.682774    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:47:13.682788    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:47:13.697106    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:13.697120    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:13.720005    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:13.720014    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:47:13.758404    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:47:13.758415    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:47:13.772147    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:47:13.772158    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:47:16.306386    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:21.308643    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:21.308836    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:47:21.340318    9805 logs.go:276] 2 containers: [efe4f3fadf4a 00e1dd75f73b]
	I0408 04:47:21.340383    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:47:21.352002    9805 logs.go:276] 2 containers: [1153306a2afa 39adc787a95e]
	I0408 04:47:21.352077    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:47:21.362894    9805 logs.go:276] 1 containers: [fb0168c66b81]
	I0408 04:47:21.362967    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:47:21.373048    9805 logs.go:276] 2 containers: [5c4ebdc9afd2 3c09b8b966ff]
	I0408 04:47:21.373121    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:47:21.383943    9805 logs.go:276] 1 containers: [831215b06834]
	I0408 04:47:21.384012    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:47:21.398424    9805 logs.go:276] 2 containers: [6c512df4836a 4275b5aac9cf]
	I0408 04:47:21.398492    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:47:21.408565    9805 logs.go:276] 0 containers: []
	W0408 04:47:21.408577    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:47:21.408634    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:47:21.421536    9805 logs.go:276] 2 containers: [9adcbc7a9018 b3dc342b3ac5]
	I0408 04:47:21.421555    9805 logs.go:123] Gathering logs for kube-controller-manager [4275b5aac9cf] ...
	I0408 04:47:21.421561    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4275b5aac9cf"
	I0408 04:47:21.435098    9805 logs.go:123] Gathering logs for storage-provisioner [9adcbc7a9018] ...
	I0408 04:47:21.435111    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9adcbc7a9018"
	I0408 04:47:21.447165    9805 logs.go:123] Gathering logs for storage-provisioner [b3dc342b3ac5] ...
	I0408 04:47:21.447178    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3dc342b3ac5"
	I0408 04:47:21.458224    9805 logs.go:123] Gathering logs for kube-apiserver [efe4f3fadf4a] ...
	I0408 04:47:21.458235    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efe4f3fadf4a"
	I0408 04:47:21.472494    9805 logs.go:123] Gathering logs for kube-apiserver [00e1dd75f73b] ...
	I0408 04:47:21.472503    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00e1dd75f73b"
	I0408 04:47:21.497554    9805 logs.go:123] Gathering logs for etcd [1153306a2afa] ...
	I0408 04:47:21.497565    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1153306a2afa"
	I0408 04:47:21.512105    9805 logs.go:123] Gathering logs for kube-controller-manager [6c512df4836a] ...
	I0408 04:47:21.512116    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c512df4836a"
	I0408 04:47:21.529715    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:47:21.529725    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:47:21.565802    9805 logs.go:123] Gathering logs for etcd [39adc787a95e] ...
	I0408 04:47:21.565814    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39adc787a95e"
	I0408 04:47:21.585881    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:47:21.585894    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:47:21.598101    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:47:21.598116    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:47:21.602343    9805 logs.go:123] Gathering logs for kube-scheduler [5c4ebdc9afd2] ...
	I0408 04:47:21.602352    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4ebdc9afd2"
	I0408 04:47:21.613899    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:47:21.613910    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:47:21.635398    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:47:21.635409    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 04:47:21.671859    9805 logs.go:123] Gathering logs for coredns [fb0168c66b81] ...
	I0408 04:47:21.671867    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb0168c66b81"
	I0408 04:47:21.683044    9805 logs.go:123] Gathering logs for kube-scheduler [3c09b8b966ff] ...
	I0408 04:47:21.683056    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c09b8b966ff"
	I0408 04:47:21.698022    9805 logs.go:123] Gathering logs for kube-proxy [831215b06834] ...
	I0408 04:47:21.698034    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831215b06834"
	I0408 04:47:24.211437    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:29.213695    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:29.213749    9805 kubeadm.go:591] duration metric: took 4m3.881072542s to restartPrimaryControlPlane
	W0408 04:47:29.213790    9805 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 04:47:29.213812    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0408 04:47:30.244487    9805 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.030677125s)
	I0408 04:47:30.244552    9805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 04:47:30.249630    9805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 04:47:30.252524    9805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 04:47:30.255930    9805 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 04:47:30.255938    9805 kubeadm.go:156] found existing configuration files:
	
	I0408 04:47:30.255983    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/admin.conf
	I0408 04:47:30.258963    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 04:47:30.259002    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 04:47:30.262973    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/kubelet.conf
	I0408 04:47:30.266379    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 04:47:30.266407    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 04:47:30.269060    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/controller-manager.conf
	I0408 04:47:30.271582    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 04:47:30.271602    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 04:47:30.274830    9805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/scheduler.conf
	I0408 04:47:30.279109    9805 kubeadm.go:162] "https://control-plane.minikube.internal:51448" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51448 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 04:47:30.279149    9805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
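
The grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so the upcoming `kubeadm init` can rewrite it. The same logic in a pure-Go sketch (hypothetical helper; the endpoint and file list are copied from the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51448"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at a different endpoint: remove it so
			// kubeadm init regenerates a fresh kubeconfig.
			os.Remove(f)
			fmt.Println("removed stale", f)
		}
	}
}
```
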
	I0408 04:47:30.282309    9805 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 04:47:30.299871    9805 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0408 04:47:30.299962    9805 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 04:47:30.351371    9805 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 04:47:30.351425    9805 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 04:47:30.351480    9805 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 04:47:30.402035    9805 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 04:47:30.410246    9805 out.go:204]   - Generating certificates and keys ...
	I0408 04:47:30.410280    9805 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 04:47:30.410311    9805 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 04:47:30.410347    9805 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 04:47:30.410378    9805 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 04:47:30.410410    9805 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 04:47:30.410435    9805 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 04:47:30.410469    9805 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 04:47:30.410514    9805 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 04:47:30.410560    9805 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 04:47:30.410598    9805 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 04:47:30.410617    9805 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 04:47:30.410649    9805 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 04:47:30.472771    9805 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 04:47:30.568193    9805 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 04:47:30.609220    9805 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 04:47:30.675304    9805 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 04:47:30.704579    9805 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 04:47:30.704990    9805 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 04:47:30.705019    9805 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 04:47:30.772566    9805 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 04:47:30.776774    9805 out.go:204]   - Booting up control plane ...
	I0408 04:47:30.776829    9805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 04:47:30.776916    9805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 04:47:30.776964    9805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 04:47:30.777013    9805 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 04:47:30.777101    9805 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 04:47:34.777108    9805 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.002789 seconds
	I0408 04:47:34.777188    9805 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 04:47:34.781652    9805 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 04:47:35.294223    9805 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 04:47:35.294447    9805 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-462000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 04:47:35.798722    9805 kubeadm.go:309] [bootstrap-token] Using token: yxvc6h.6bzi3s39gqqnpulm
	I0408 04:47:35.802619    9805 out.go:204]   - Configuring RBAC rules ...
	I0408 04:47:35.802694    9805 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 04:47:35.806390    9805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 04:47:35.812010    9805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 04:47:35.813007    9805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 04:47:35.814090    9805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 04:47:35.814997    9805 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 04:47:35.819653    9805 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 04:47:35.998602    9805 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 04:47:36.208521    9805 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 04:47:36.209183    9805 kubeadm.go:309] 
	I0408 04:47:36.209212    9805 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 04:47:36.209216    9805 kubeadm.go:309] 
	I0408 04:47:36.209257    9805 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 04:47:36.209263    9805 kubeadm.go:309] 
	I0408 04:47:36.209280    9805 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 04:47:36.209337    9805 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 04:47:36.209365    9805 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 04:47:36.209367    9805 kubeadm.go:309] 
	I0408 04:47:36.209410    9805 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 04:47:36.209417    9805 kubeadm.go:309] 
	I0408 04:47:36.209454    9805 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 04:47:36.209459    9805 kubeadm.go:309] 
	I0408 04:47:36.209498    9805 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 04:47:36.209550    9805 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 04:47:36.209588    9805 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 04:47:36.209596    9805 kubeadm.go:309] 
	I0408 04:47:36.209649    9805 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 04:47:36.209699    9805 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 04:47:36.209701    9805 kubeadm.go:309] 
	I0408 04:47:36.209766    9805 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yxvc6h.6bzi3s39gqqnpulm \
	I0408 04:47:36.209830    9805 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63c1082056c9546e83bc7e238ddca3361d3bc0d4a9173109edd9ba5d9e410231 \
	I0408 04:47:36.209843    9805 kubeadm.go:309] 	--control-plane 
	I0408 04:47:36.209847    9805 kubeadm.go:309] 
	I0408 04:47:36.209890    9805 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 04:47:36.209894    9805 kubeadm.go:309] 
	I0408 04:47:36.209941    9805 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yxvc6h.6bzi3s39gqqnpulm \
	I0408 04:47:36.210000    9805 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63c1082056c9546e83bc7e238ddca3361d3bc0d4a9173109edd9ba5d9e410231 
	I0408 04:47:36.210108    9805 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
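
The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is a pin of the cluster CA's public key: SHA-256 over the DER-encoded SubjectPublicKeyInfo. A sketch of recomputing it, assuming the CA certificate sits in the certificateDir reported earlier in this log (/var/lib/minikube/certs):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// certificateDir from the kubeadm output above is /var/lib/minikube/certs.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm pins SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```
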
	I0408 04:47:36.210197    9805 cni.go:84] Creating CNI manager for ""
	I0408 04:47:36.210206    9805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:47:36.214059    9805 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 04:47:36.221024    9805 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 04:47:36.223995    9805 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 04:47:36.230716    9805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 04:47:36.230801    9805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-462000 minikube.k8s.io/updated_at=2024_04_08T04_47_36_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=stopped-upgrade-462000 minikube.k8s.io/primary=true
	I0408 04:47:36.230810    9805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 04:47:36.235793    9805 ops.go:34] apiserver oom_adj: -16
	I0408 04:47:36.262877    9805 kubeadm.go:1107] duration metric: took 32.08925ms to wait for elevateKubeSystemPrivileges
	W0408 04:47:36.268441    9805 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 04:47:36.268452    9805 kubeadm.go:393] duration metric: took 4m10.949799792s to StartCluster
	I0408 04:47:36.268463    9805 settings.go:142] acquiring lock: {Name:mkd5c8378547f472aec7259eff81e77b1454222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:47:36.268545    9805 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:47:36.268962    9805 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/kubeconfig: {Name:mk04d6060f19666b377da34a3aa7f8b9bcbb5054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:47:36.269183    9805 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:47:36.272852    9805 out.go:177] * Verifying Kubernetes components...
	I0408 04:47:36.269209    9805 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 04:47:36.269261    9805 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:47:36.280092    9805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 04:47:36.280093    9805 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-462000"
	I0408 04:47:36.280121    9805 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-462000"
	W0408 04:47:36.280125    9805 addons.go:243] addon storage-provisioner should already be in state true
	I0408 04:47:36.280096    9805 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-462000"
	I0408 04:47:36.280139    9805 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-462000"
	I0408 04:47:36.280144    9805 host.go:66] Checking if "stopped-upgrade-462000" exists ...
	I0408 04:47:36.280585    9805 retry.go:31] will retry after 1.205678631s: connect: dial unix /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/monitor: connect: connection refused
	I0408 04:47:36.285027    9805 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 04:47:36.289092    9805 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 04:47:36.289100    9805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 04:47:36.289110    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:47:36.379292    9805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 04:47:36.384771    9805 api_server.go:52] waiting for apiserver process to appear ...
	I0408 04:47:36.384810    9805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 04:47:36.388602    9805 api_server.go:72] duration metric: took 119.409208ms to wait for apiserver process to appear ...
	I0408 04:47:36.388611    9805 api_server.go:88] waiting for apiserver healthz status ...
	I0408 04:47:36.388618    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:36.463765    9805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 04:47:37.489307    9805 kapi.go:59] client config for stopped-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1039f7940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 04:47:37.489449    9805 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-462000"
	W0408 04:47:37.489454    9805 addons.go:243] addon default-storageclass should already be in state true
	I0408 04:47:37.489467    9805 host.go:66] Checking if "stopped-upgrade-462000" exists ...
	I0408 04:47:37.490185    9805 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 04:47:37.490191    9805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 04:47:37.490197    9805 sshutil.go:53] new ssh client: &{IP:localhost Port:51414 SSHKeyPath:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0408 04:47:37.529665    9805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 04:47:41.390708    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:41.390781    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:46.391180    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:46.391213    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:51.391550    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:51.391572    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:47:56.391980    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:47:56.392028    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:01.392516    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:01.392556    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:06.393386    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:06.393411    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0408 04:48:07.594580    9805 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0408 04:48:07.597828    9805 out.go:177] * Enabled addons: storage-provisioner
	I0408 04:48:07.604746    9805 addons.go:505] duration metric: took 31.335975292s for enable addons: enabled=[storage-provisioner]
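
The 'default-storageclass' failure above is a StorageClass list call timing out against the unreachable apiserver. A stripped-down reproduction with client-go (hypothetical program, not minikube's code; the host and credential paths are copied from the rest.Config dump earlier in the log):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Field values taken from the rest.Config dump above, keeping only what matters.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/stopped-upgrade-462000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/18588-7343/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With the apiserver down, this is the call that fails with
	// "dial tcp 10.0.2.15:8443: i/o timeout" as logged above.
	scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
```
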
	I0408 04:48:11.394390    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:11.394426    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:16.395736    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:16.395770    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:21.397350    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:21.397371    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:26.398707    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:26.398745    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:31.400878    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:31.400915    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:36.401870    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:36.401984    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:48:36.420698    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:48:36.420770    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:48:36.431167    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:48:36.431241    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:48:36.441781    9805 logs.go:276] 2 containers: [d71af50a0052 7fb2d5f97da3]
	I0408 04:48:36.441856    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:48:36.451996    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:48:36.452060    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:48:36.462935    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:48:36.463011    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:48:36.474067    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:48:36.474139    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:48:36.484521    9805 logs.go:276] 0 containers: []
	W0408 04:48:36.484533    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:48:36.484591    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:48:36.495221    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:48:36.495235    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:48:36.495241    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:48:36.510213    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:48:36.510224    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:48:36.524472    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:48:36.524484    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:48:36.536218    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:48:36.536228    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:48:36.550973    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:48:36.550984    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:48:36.571340    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:48:36.571354    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:48:36.605626    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:48:36.605723    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:48:36.606665    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:48:36.606672    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:48:36.611339    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:48:36.611346    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:48:36.646775    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:48:36.646786    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:48:36.658454    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:48:36.658466    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:48:36.684230    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:48:36.684241    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:48:36.695518    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:48:36.695533    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:48:36.711454    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:48:36.711463    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:48:36.730221    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:48:36.730232    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:48:36.730257    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:48:36.730263    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:48:36.730267    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:48:36.730271    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:48:36.730274    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
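	The alternating api_server.go:253/269 pairs above and below trace a plain poll loop: issue a GET against /healthz with roughly a five-second client timeout, record the failure, wait, and try again. A minimal bash sketch of that behavior, assuming only what the log shows (endpoint, timeout, cadence); minikube's actual loop lives in Go inside api_server.go:

	    # Approximation of the observed healthz poll; not minikube source.
	    deadline=$((SECONDS + 240))   # overall budget is an assumption
	    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
	        echo "healthz not answering; retrying"
	        [ "$SECONDS" -ge "$deadline" ] && break
	        sleep 5
	    done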
	I0408 04:48:46.734241    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:48:51.736440    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:48:51.736708    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:48:51.762931    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:48:51.763050    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:48:51.779232    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:48:51.779324    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:48:51.792467    9805 logs.go:276] 2 containers: [d71af50a0052 7fb2d5f97da3]
	I0408 04:48:51.792525    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:48:51.803956    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:48:51.804029    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:48:51.814598    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:48:51.814678    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:48:51.824946    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:48:51.825010    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:48:51.834950    9805 logs.go:276] 0 containers: []
	W0408 04:48:51.834961    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:48:51.835011    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:48:51.848442    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:48:51.848456    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:48:51.848461    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:48:51.881831    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:48:51.881922    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:48:51.882861    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:48:51.882867    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:48:51.916907    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:48:51.916918    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:48:51.933424    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:48:51.933435    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:48:51.948271    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:48:51.948282    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:48:51.959649    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:48:51.959659    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:48:51.982196    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:48:51.982208    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:48:51.986680    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:48:51.986688    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:48:52.000158    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:48:52.000169    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:48:52.013736    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:48:52.013750    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:48:52.030006    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:48:52.030018    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:48:52.047185    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:48:52.047195    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:48:52.058332    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:48:52.058345    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:48:52.070351    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:48:52.070364    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:48:52.070390    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:48:52.070394    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:48:52.070397    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:48:52.070402    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:48:52.070405    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
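	Each diagnostic pass enumerates the control-plane containers with one docker ps call per component, matching kubeadm's k8s_<component> naming. The eight calls collapse naturally into a loop; this form is illustrative and equivalent to the filters quoted above:

	    # One pass of the container discovery the log repeats verbatim.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	        echo "${c}: ${ids:-none}"
	    done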
	I0408 04:49:02.072698    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:49:07.074662    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:49:07.075138    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:49:07.114308    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:49:07.114461    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:49:07.136862    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:49:07.136980    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:49:07.152039    9805 logs.go:276] 2 containers: [d71af50a0052 7fb2d5f97da3]
	I0408 04:49:07.152114    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:49:07.163988    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:49:07.164059    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:49:07.175091    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:49:07.175157    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:49:07.185439    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:49:07.185501    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:49:07.195386    9805 logs.go:276] 0 containers: []
	W0408 04:49:07.195397    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:49:07.195447    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:49:07.205951    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:49:07.205965    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:49:07.205970    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:49:07.210643    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:49:07.210652    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:49:07.221786    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:49:07.221796    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:49:07.239384    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:49:07.239397    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:49:07.250895    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:49:07.250910    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:49:07.262706    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:49:07.262716    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:49:07.277971    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:49:07.277984    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:49:07.289744    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:49:07.289754    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:49:07.313623    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:49:07.313635    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:49:07.346512    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:49:07.346606    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:49:07.347603    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:49:07.347607    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:49:07.382416    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:49:07.382430    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:49:07.396971    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:49:07.396980    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:49:07.410065    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:49:07.410079    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:49:07.422025    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:07.422035    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:49:07.422063    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:49:07.422069    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:49:07.422073    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:49:07.422077    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:07.422080    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
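	The recurring kubelet problem is an authorization denial, not a crash: "no relationship found between node ... and this object" is the node authorizer refusing to let system:node:stopped-upgrade-462000 read the coredns ConfigMap before any pod referencing it was bound to that node. If the apiserver were reachable, the same decision could be queried directly; the command below is a hedged illustration, not something the harness runs:

	    # Ask the apiserver to evaluate the exact access the kubelet was denied.
	    kubectl auth can-i list configmaps \
	        --namespace=kube-system \
	        --as=system:node:stopped-upgrade-462000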
	I0408 04:49:17.426096    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:49:22.428671    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:49:22.429093    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:49:22.469800    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:49:22.469933    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:49:22.492015    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:49:22.492112    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:49:22.506946    9805 logs.go:276] 2 containers: [d71af50a0052 7fb2d5f97da3]
	I0408 04:49:22.507027    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:49:22.519624    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:49:22.519692    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:49:22.530651    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:49:22.530725    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:49:22.544213    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:49:22.544286    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:49:22.554758    9805 logs.go:276] 0 containers: []
	W0408 04:49:22.554770    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:49:22.554827    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:49:22.565425    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:49:22.565443    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:49:22.565449    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:49:22.577314    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:49:22.577324    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:49:22.613030    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:49:22.613042    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:49:22.627688    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:49:22.627699    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:49:22.642970    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:49:22.642979    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:49:22.654266    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:49:22.654277    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:49:22.666622    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:49:22.666635    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:49:22.682150    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:49:22.682159    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:49:22.715244    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:49:22.715338    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:49:22.716335    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:49:22.716339    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:49:22.720133    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:49:22.720142    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:49:22.743464    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:49:22.743472    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:49:22.759783    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:49:22.759794    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:49:22.777701    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:49:22.777713    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:49:22.790399    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:22.790409    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:49:22.790435    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:49:22.790445    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:49:22.790452    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:49:22.790473    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:22.790480    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:49:32.793169    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:49:37.795375    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:49:37.795872    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:49:37.835748    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:49:37.835883    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:49:37.858460    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:49:37.858574    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:49:37.874097    9805 logs.go:276] 2 containers: [d71af50a0052 7fb2d5f97da3]
	I0408 04:49:37.874176    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:49:37.886138    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:49:37.886206    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:49:37.897229    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:49:37.897300    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:49:37.908320    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:49:37.908394    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:49:37.918416    9805 logs.go:276] 0 containers: []
	W0408 04:49:37.918427    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:49:37.918484    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:49:37.934466    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:49:37.934483    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:49:37.934488    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:49:37.951907    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:49:37.951917    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:49:37.986626    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:49:37.986726    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:49:37.987726    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:49:37.987731    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:49:37.992414    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:49:37.992424    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:49:38.035070    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:49:38.035083    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:49:38.050562    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:49:38.050574    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:49:38.062310    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:49:38.062319    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:49:38.078021    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:49:38.078035    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:49:38.091944    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:49:38.091958    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:49:38.104001    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:49:38.104011    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:49:38.128174    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:49:38.128182    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:49:38.139993    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:49:38.140004    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:49:38.154458    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:49:38.154471    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:49:38.168266    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:38.168275    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:49:38.168301    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:49:38.168305    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:49:38.168308    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:49:38.168335    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:38.168339    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:49:48.171465    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:49:53.172883    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:49:53.173351    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:49:53.207502    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:49:53.207628    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:49:53.228611    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:49:53.228709    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:49:53.243770    9805 logs.go:276] 4 containers: [73b0a94922e9 773dd5a6812a d71af50a0052 7fb2d5f97da3]
	I0408 04:49:53.243854    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:49:53.255862    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:49:53.255933    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:49:53.266521    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:49:53.266585    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:49:53.277438    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:49:53.277514    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:49:53.287736    9805 logs.go:276] 0 containers: []
	W0408 04:49:53.287747    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:49:53.287804    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:49:53.299184    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:49:53.299204    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:49:53.299210    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:49:53.310804    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:49:53.310816    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:49:53.322146    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:49:53.322158    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:49:53.337940    9805 logs.go:123] Gathering logs for coredns [73b0a94922e9] ...
	I0408 04:49:53.337952    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b0a94922e9"
	I0408 04:49:53.349924    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:49:53.349934    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:49:53.361912    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:49:53.361922    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:49:53.366166    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:49:53.366177    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:49:53.381084    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:49:53.381096    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:49:53.416656    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:49:53.416671    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:49:53.430592    9805 logs.go:123] Gathering logs for coredns [773dd5a6812a] ...
	I0408 04:49:53.430605    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 773dd5a6812a"
	I0408 04:49:53.442871    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:49:53.442881    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:49:53.454229    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:49:53.454238    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:49:53.465688    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:49:53.465696    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:49:53.482874    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:49:53.482885    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:49:53.505777    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:49:53.505784    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:49:53.539137    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:49:53.539229    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:49:53.540199    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:53.540207    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:49:53.540229    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:49:53.540233    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:49:53.540238    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:49:53.540241    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:53.540244    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
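	Note the change in this pass: the coredns filter now returns four containers (73b0a94922e9 and 773dd5a6812a alongside d71af50a0052 and 7fb2d5f97da3), where every earlier pass saw two. That pattern is consistent with the kubelet replacing coredns containers while the apiserver stays unreachable. A hedged way to see when each was created and how it exited:

	    # Inspect status of all coredns containers found by the same filter.
	    docker ps -a --filter name=k8s_coredns \
	        --format 'table {{.ID}}\t{{.Status}}\t{{.Names}}'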
	I0408 04:50:03.543117    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:50:08.545298    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:50:08.545672    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:50:08.576643    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:50:08.576761    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:50:08.600711    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:50:08.600811    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:50:08.618792    9805 logs.go:276] 4 containers: [73b0a94922e9 773dd5a6812a d71af50a0052 7fb2d5f97da3]
	I0408 04:50:08.618865    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:50:08.632410    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:50:08.632474    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:50:08.643223    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:50:08.643290    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:50:08.654007    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:50:08.654069    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:50:08.665273    9805 logs.go:276] 0 containers: []
	W0408 04:50:08.665288    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:50:08.665335    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:50:08.676125    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:50:08.676143    9805 logs.go:123] Gathering logs for coredns [773dd5a6812a] ...
	I0408 04:50:08.676148    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 773dd5a6812a"
	I0408 04:50:08.687558    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:50:08.687568    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:50:08.702743    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:50:08.702754    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:50:08.706939    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:50:08.706951    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:50:08.718296    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:50:08.718309    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:50:08.736443    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:50:08.736454    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:50:08.748333    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:50:08.748344    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:50:08.782663    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:50:08.782754    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:50:08.783693    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:50:08.783698    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:50:08.818003    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:50:08.818016    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:50:08.832432    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:50:08.832442    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:50:08.846453    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:50:08.846466    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:50:08.858112    9805 logs.go:123] Gathering logs for coredns [73b0a94922e9] ...
	I0408 04:50:08.858125    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b0a94922e9"
	I0408 04:50:08.873167    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:50:08.873178    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:50:08.884313    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:50:08.884324    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:50:08.895787    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:50:08.895801    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:50:08.919915    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:08.919923    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:50:08.919946    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:50:08.919950    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:50:08.919953    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:50:08.919975    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:08.919980    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
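	One detail of the "container status" step repeated in every pass: the backquoted "which crictl || echo crictl" makes the command degrade gracefully, trying crictl when it is installed and otherwise falling through to docker ps. An equivalent spelling with $() instead of backquotes, behavior unchanged and purely illustrative:

	    # Same fallback as the logged command, using $() quoting.
	    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a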
	I0408 04:50:18.920671    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:50:23.920143    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:50:23.920633    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:50:23.959231    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:50:23.959364    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:50:23.980118    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:50:23.980231    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:50:23.995797    9805 logs.go:276] 4 containers: [73b0a94922e9 773dd5a6812a d71af50a0052 7fb2d5f97da3]
	I0408 04:50:23.995874    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:50:24.008166    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:50:24.008233    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:50:24.018984    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:50:24.019051    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:50:24.029877    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:50:24.029943    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:50:24.040147    9805 logs.go:276] 0 containers: []
	W0408 04:50:24.040159    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:50:24.040211    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:50:24.050453    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:50:24.050470    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:50:24.050476    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:50:24.085488    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:50:24.085502    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:50:24.098118    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:50:24.098129    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:50:24.121714    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:50:24.121722    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:50:24.135853    9805 logs.go:123] Gathering logs for coredns [773dd5a6812a] ...
	I0408 04:50:24.135862    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 773dd5a6812a"
	I0408 04:50:24.147697    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:50:24.147709    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:50:24.159825    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:50:24.159834    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:50:24.171462    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:50:24.171475    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:50:24.186505    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:50:24.186517    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:50:24.201560    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:50:24.201574    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:50:24.213993    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:50:24.214005    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:50:24.248757    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:50:24.248848    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:50:24.249783    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:50:24.249787    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:50:24.254265    9805 logs.go:123] Gathering logs for coredns [73b0a94922e9] ...
	I0408 04:50:24.254273    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b0a94922e9"
	I0408 04:50:24.266481    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:50:24.266494    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:50:24.278241    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:50:24.278250    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:50:24.295653    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:24.295666    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:50:24.295690    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:50:24.295694    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:50:24.295697    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:50:24.295718    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:24.295723    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:34.293191    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:50:39.294123    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:50:39.294551    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:50:39.332831    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:50:39.332972    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:50:39.357892    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:50:39.358010    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:50:39.374359    9805 logs.go:276] 4 containers: [73b0a94922e9 773dd5a6812a d71af50a0052 7fb2d5f97da3]
	I0408 04:50:39.374433    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:50:39.385984    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:50:39.386047    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:50:39.396433    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:50:39.396507    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:50:39.406783    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:50:39.406842    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:50:39.417277    9805 logs.go:276] 0 containers: []
	W0408 04:50:39.417290    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:50:39.417348    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:50:39.428203    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:50:39.428218    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:50:39.428224    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:50:39.462448    9805 logs.go:123] Gathering logs for coredns [73b0a94922e9] ...
	I0408 04:50:39.462462    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b0a94922e9"
	I0408 04:50:39.474503    9805 logs.go:123] Gathering logs for coredns [773dd5a6812a] ...
	I0408 04:50:39.474517    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 773dd5a6812a"
	I0408 04:50:39.486385    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:50:39.486399    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:50:39.498085    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:50:39.498099    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:50:39.510375    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:50:39.510388    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:50:39.525871    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:50:39.525883    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:50:39.549090    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:50:39.549103    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:50:39.560573    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:50:39.560586    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:50:39.584822    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:50:39.584832    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:50:39.619439    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:50:39.619532    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:50:39.620531    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:50:39.620538    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:50:39.634552    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:50:39.634564    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:50:39.647110    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:50:39.647122    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:50:39.671369    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:50:39.671401    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:50:39.675584    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:50:39.675592    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:50:39.690060    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:39.690072    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:50:39.690097    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:50:39.690101    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:50:39.690104    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:50:39.690130    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:39.690134    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:49.691609    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:50:54.691285    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:50:54.691389    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:50:54.705563    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:50:54.705637    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:50:54.717083    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:50:54.717160    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:50:54.729217    9805 logs.go:276] 4 containers: [73b0a94922e9 773dd5a6812a d71af50a0052 7fb2d5f97da3]
	I0408 04:50:54.729297    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:50:54.739049    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:50:54.739112    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:50:54.749657    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:50:54.749725    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:50:54.760123    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:50:54.760191    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:50:54.770110    9805 logs.go:276] 0 containers: []
	W0408 04:50:54.770123    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:50:54.770181    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:50:54.780593    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:50:54.780609    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:50:54.780615    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:50:54.796467    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:50:54.796480    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:50:54.807924    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:50:54.807936    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:50:54.823314    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:50:54.823324    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:50:54.837454    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:50:54.837463    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:50:54.861186    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:50:54.861202    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:50:54.895499    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:50:54.895598    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:50:54.896597    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:50:54.896605    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:50:54.934192    9805 logs.go:123] Gathering logs for coredns [73b0a94922e9] ...
	I0408 04:50:54.934204    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b0a94922e9"
	I0408 04:50:54.947035    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:50:54.947047    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:50:54.959935    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:50:54.959947    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:50:54.974186    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:50:54.974197    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:50:54.987125    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:50:54.987138    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:50:54.991938    9805 logs.go:123] Gathering logs for coredns [773dd5a6812a] ...
	I0408 04:50:54.991944    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 773dd5a6812a"
	I0408 04:50:55.010099    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:50:55.010110    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:50:55.028255    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:50:55.028267    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:50:55.040175    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:55.040190    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:50:55.040220    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:50:55.040225    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:50:55.040229    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:50:55.040237    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:55.040239    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:05.043437    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:51:10.045547    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:51:10.046037    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:51:10.092805    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:51:10.092935    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:51:10.114505    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:51:10.114605    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:51:10.131400    9805 logs.go:276] 4 containers: [73b0a94922e9 773dd5a6812a d71af50a0052 7fb2d5f97da3]
	I0408 04:51:10.131480    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:51:10.143406    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:51:10.143473    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:51:10.154415    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:51:10.154476    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:51:10.169990    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:51:10.170061    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:51:10.180666    9805 logs.go:276] 0 containers: []
	W0408 04:51:10.180677    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:51:10.180732    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:51:10.195506    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:51:10.195523    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:51:10.195528    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:51:10.207859    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:51:10.207872    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:51:10.220192    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:51:10.220205    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:51:10.235370    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:51:10.235383    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:51:10.270568    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:51:10.270580    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:51:10.282428    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:51:10.282441    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:51:10.306007    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:51:10.306017    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:51:10.340317    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:51:10.340409    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:51:10.341375    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:51:10.341380    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:51:10.359356    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:51:10.359369    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:51:10.371660    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:51:10.371674    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:51:10.389355    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:51:10.389365    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:51:10.393951    9805 logs.go:123] Gathering logs for coredns [773dd5a6812a] ...
	I0408 04:51:10.393962    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 773dd5a6812a"
	I0408 04:51:10.406224    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:51:10.406237    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:51:10.421593    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:51:10.421602    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:51:10.433624    9805 logs.go:123] Gathering logs for coredns [73b0a94922e9] ...
	I0408 04:51:10.433636    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b0a94922e9"
	I0408 04:51:10.445729    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:10.445740    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:51:10.445765    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:51:10.445774    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:51:10.445778    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:51:10.445785    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:10.445787    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:20.448226    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:51:25.450357    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:51:25.450757    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 04:51:25.493310    9805 logs.go:276] 1 containers: [09f5134bbba6]
	I0408 04:51:25.493448    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 04:51:25.516071    9805 logs.go:276] 1 containers: [7e77108c8a17]
	I0408 04:51:25.516168    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 04:51:25.531757    9805 logs.go:276] 4 containers: [73b0a94922e9 773dd5a6812a d71af50a0052 7fb2d5f97da3]
	I0408 04:51:25.531835    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 04:51:25.544262    9805 logs.go:276] 1 containers: [502f052bffe1]
	I0408 04:51:25.544336    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 04:51:25.555039    9805 logs.go:276] 1 containers: [c597ac644722]
	I0408 04:51:25.555107    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 04:51:25.565646    9805 logs.go:276] 1 containers: [06b1cd795371]
	I0408 04:51:25.565711    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 04:51:25.579269    9805 logs.go:276] 0 containers: []
	W0408 04:51:25.579280    9805 logs.go:278] No container was found matching "kindnet"
	I0408 04:51:25.579338    9805 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 04:51:25.590263    9805 logs.go:276] 1 containers: [0fddf57d95d3]
	I0408 04:51:25.590281    9805 logs.go:123] Gathering logs for kube-proxy [c597ac644722] ...
	I0408 04:51:25.590286    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c597ac644722"
	I0408 04:51:25.603406    9805 logs.go:123] Gathering logs for kube-apiserver [09f5134bbba6] ...
	I0408 04:51:25.603420    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09f5134bbba6"
	I0408 04:51:25.619091    9805 logs.go:123] Gathering logs for coredns [773dd5a6812a] ...
	I0408 04:51:25.619104    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 773dd5a6812a"
	I0408 04:51:25.631566    9805 logs.go:123] Gathering logs for coredns [7fb2d5f97da3] ...
	I0408 04:51:25.631579    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fb2d5f97da3"
	I0408 04:51:25.643528    9805 logs.go:123] Gathering logs for kube-scheduler [502f052bffe1] ...
	I0408 04:51:25.643538    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502f052bffe1"
	I0408 04:51:25.659485    9805 logs.go:123] Gathering logs for kube-controller-manager [06b1cd795371] ...
	I0408 04:51:25.659497    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06b1cd795371"
	I0408 04:51:25.677685    9805 logs.go:123] Gathering logs for storage-provisioner [0fddf57d95d3] ...
	I0408 04:51:25.677696    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fddf57d95d3"
	I0408 04:51:25.690003    9805 logs.go:123] Gathering logs for Docker ...
	I0408 04:51:25.690014    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 04:51:25.713822    9805 logs.go:123] Gathering logs for container status ...
	I0408 04:51:25.713828    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 04:51:25.725816    9805 logs.go:123] Gathering logs for coredns [73b0a94922e9] ...
	I0408 04:51:25.725830    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b0a94922e9"
	I0408 04:51:25.738278    9805 logs.go:123] Gathering logs for dmesg ...
	I0408 04:51:25.738287    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 04:51:25.742398    9805 logs.go:123] Gathering logs for describe nodes ...
	I0408 04:51:25.742408    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 04:51:25.777620    9805 logs.go:123] Gathering logs for etcd [7e77108c8a17] ...
	I0408 04:51:25.777630    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e77108c8a17"
	I0408 04:51:25.798349    9805 logs.go:123] Gathering logs for kubelet ...
	I0408 04:51:25.798362    9805 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 04:51:25.831511    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:51:25.831616    9805 logs.go:138] Found kubelet problem: Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:51:25.832610    9805 logs.go:123] Gathering logs for coredns [d71af50a0052] ...
	I0408 04:51:25.832615    9805 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71af50a0052"
	I0408 04:51:25.844985    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:25.844995    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0408 04:51:25.845022    9805 out.go:239] X Problems detected in kubelet:
	W0408 04:51:25.845025    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: W0408 11:47:49.524924   10465 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	W0408 04:51:25.845030    9805 out.go:239]   Apr 08 11:47:49 stopped-upgrade-462000 kubelet[10465]: E0408 11:47:49.524995   10465 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-462000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-462000' and this object
	I0408 04:51:25.845035    9805 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:25.845038    9805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:35.848921    9805 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 04:51:40.851320    9805 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 04:51:40.857591    9805 out.go:177] 
	W0408 04:51:40.862649    9805 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0408 04:51:40.862671    9805 out.go:239] * 
	W0408 04:51:40.864592    9805 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:51:40.879523    9805 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (576.74s)
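The upgrade run above dies waiting for a healthy API server: every healthz poll against https://10.0.2.15:8443/healthz times out, and the only problem minikube surfaces is the kubelet's RBAC failure listing the coredns ConfigMap. A minimal manual-triage sketch, assuming SSH access into the guest (the endpoint and node name are taken from the log; the exact kubectl invocation is illustrative, not part of the test):

	# Probe the same endpoint the log polls; -k skips TLS verification
	# since the apiserver cert is not trusted from inside the guest.
	curl -k https://10.0.2.15:8443/healthz

	# Mirror the 'configmaps "coredns" is forbidden' entries above: ask the
	# authorizer whether this kubelet identity may list ConfigMaps.
	kubectl auth can-i list configmaps \
	  --as=system:node:stopped-upgrade-462000 \
	  --as-group=system:nodes \
	  -n kube-system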

                                                
                                    
TestPause/serial/Start (9.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-254000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-254000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.886130167s)

                                                
                                                
-- stdout --
	* [pause-254000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-254000" primary control-plane node in "pause-254000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-254000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-254000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-254000 -n pause-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-254000 -n pause-254000: exit status 7 (61.993375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.95s)
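This failure, like the TestNoKubernetes failures below, looks environmental rather than a minikube regression: QEMU cannot reach the socket_vmnet helper at /var/run/socket_vmnet. A quick host-side check, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (the service-restart line in particular is an assumption about that setup):

	# Is the unix socket present at the path the driver expects?
	ls -l /var/run/socket_vmnet

	# Is the helper process alive?
	pgrep -fl socket_vmnet

	# If not, (re)start it; assumes the Homebrew socket_vmnet service.
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet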

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-196000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-196000 --driver=qemu2 : exit status 80 (9.818074417s)

                                                
                                                
-- stdout --
	* [NoKubernetes-196000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-196000" primary control-plane node in "NoKubernetes-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-196000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000: exit status 7 (46.307375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)
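This and the remaining TestNoKubernetes subtests fail on the same socket_vmnet connection refusal, and the stderr above already names the recovery step. A sketch of that retry, reusing the profile name and start arguments from the test (to be run only once socket_vmnet is reachable again):

	# Drop the half-created profile, then rerun the original start command.
	out/minikube-darwin-arm64 delete -p NoKubernetes-196000
	out/minikube-darwin-arm64 start -p NoKubernetes-196000 --driver=qemu2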

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --driver=qemu2 : exit status 80 (5.228563083s)

                                                
                                                
-- stdout --
	* [NoKubernetes-196000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-196000
	* Restarting existing qemu2 VM for "NoKubernetes-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-196000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000: exit status 7 (63.395334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                    
TestNoKubernetes/serial/Start (5.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --driver=qemu2 : exit status 80 (5.231233167s)

                                                
                                                
-- stdout --
	* [NoKubernetes-196000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-196000
	* Restarting existing qemu2 VM for "NoKubernetes-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-196000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000: exit status 7 (49.857625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.28s)
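Note: the post-mortem helper distinguishes two different exit codes here: the start command fails with exit status 80 (the "X Exiting due to GUEST_PROVISION" path), while the follow-up status probe returns exit status 7, which the harness annotates as "may be ok" because it merely reports a stopped host. This can be reproduced by hand with the same binary and flags the test uses:

	out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000
	echo "status exit code: $?"    # 7 here: the profile exists but the host is Stopped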

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-196000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-196000 --driver=qemu2 : exit status 80 (5.250017208s)

-- stdout --
	* [NoKubernetes-196000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-196000
	* Restarting existing qemu2 VM for "NoKubernetes-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-196000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-196000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-196000 -n NoKubernetes-196000: exit status 7 (52.953292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
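Note: minikube attempts the host start twice, with a 5-second back-off between attempts ("Will try again in 5 seconds ..." in the verbose traces below), so each of these ~5-10s failures is actually two identical socket_vmnet connection attempts. A rough shell equivalent of the observed behavior, using the same command the test runs; the loop itself is an illustration of the retry pattern, not minikube's code:

	for attempt in 1 2; do
		out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --driver=qemu2 && break
		sleep 5    # matches the "Will try again in 5 seconds" back-off in the traces
	done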

TestNetworkPlugins/group/auto/Start (9.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.686260125s)

-- stdout --
	* [auto-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-146000" primary control-plane node in "auto-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:49:55.918059   10045 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:49:55.918219   10045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:49:55.918223   10045 out.go:304] Setting ErrFile to fd 2...
	I0408 04:49:55.918225   10045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:49:55.918350   10045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:49:55.919453   10045 out.go:298] Setting JSON to false
	I0408 04:49:55.935837   10045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6564,"bootTime":1712570431,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:49:55.935906   10045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:49:55.940682   10045 out.go:177] * [auto-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:49:55.948690   10045 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:49:55.953649   10045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:49:55.948763   10045 notify.go:220] Checking for updates...
	I0408 04:49:55.956658   10045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:49:55.959642   10045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:49:55.962625   10045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:49:55.965652   10045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:49:55.969024   10045 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:49:55.969094   10045 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:49:55.969142   10045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:49:55.973621   10045 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:49:55.980658   10045 start.go:297] selected driver: qemu2
	I0408 04:49:55.980666   10045 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:49:55.980672   10045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:49:55.982961   10045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:49:55.986628   10045 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:49:55.989661   10045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:49:55.989697   10045 cni.go:84] Creating CNI manager for ""
	I0408 04:49:55.989703   10045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:49:55.989707   10045 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:49:55.989740   10045 start.go:340] cluster config:
	{Name:auto-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:49:55.994299   10045 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:49:56.001629   10045 out.go:177] * Starting "auto-146000" primary control-plane node in "auto-146000" cluster
	I0408 04:49:56.005664   10045 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:49:56.005681   10045 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:49:56.005690   10045 cache.go:56] Caching tarball of preloaded images
	I0408 04:49:56.005745   10045 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:49:56.005753   10045 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:49:56.005839   10045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/auto-146000/config.json ...
	I0408 04:49:56.005852   10045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/auto-146000/config.json: {Name:mkc69a0746cc980d19fa639c6d775ee045ae5afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:49:56.006091   10045 start.go:360] acquireMachinesLock for auto-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:49:56.006125   10045 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "auto-146000"
	I0408 04:49:56.006136   10045 start.go:93] Provisioning new machine with config: &{Name:auto-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:auto-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:49:56.006168   10045 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:49:56.014605   10045 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:49:56.030951   10045 start.go:159] libmachine.API.Create for "auto-146000" (driver="qemu2")
	I0408 04:49:56.030984   10045 client.go:168] LocalClient.Create starting
	I0408 04:49:56.031046   10045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:49:56.031074   10045 main.go:141] libmachine: Decoding PEM data...
	I0408 04:49:56.031082   10045 main.go:141] libmachine: Parsing certificate...
	I0408 04:49:56.031121   10045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:49:56.031143   10045 main.go:141] libmachine: Decoding PEM data...
	I0408 04:49:56.031153   10045 main.go:141] libmachine: Parsing certificate...
	I0408 04:49:56.031598   10045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:49:56.176597   10045 main.go:141] libmachine: Creating SSH key...
	I0408 04:49:56.224243   10045 main.go:141] libmachine: Creating Disk image...
	I0408 04:49:56.224251   10045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:49:56.224423   10045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2
	I0408 04:49:56.236955   10045 main.go:141] libmachine: STDOUT: 
	I0408 04:49:56.236985   10045 main.go:141] libmachine: STDERR: 
	I0408 04:49:56.237035   10045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2 +20000M
	I0408 04:49:56.248173   10045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:49:56.248201   10045 main.go:141] libmachine: STDERR: 
	I0408 04:49:56.248215   10045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2
	I0408 04:49:56.248218   10045 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:49:56.248245   10045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:0e:0e:1c:bb:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2
	I0408 04:49:56.250049   10045 main.go:141] libmachine: STDOUT: 
	I0408 04:49:56.250064   10045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:49:56.250081   10045 client.go:171] duration metric: took 219.095375ms to LocalClient.Create
	I0408 04:49:58.252167   10045 start.go:128] duration metric: took 2.246016417s to createHost
	I0408 04:49:58.252207   10045 start.go:83] releasing machines lock for "auto-146000", held for 2.246107417s
	W0408 04:49:58.252255   10045 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:49:58.257717   10045 out.go:177] * Deleting "auto-146000" in qemu2 ...
	W0408 04:49:58.283256   10045 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:49:58.283275   10045 start.go:728] Will try again in 5 seconds ...
	I0408 04:50:03.283644   10045 start.go:360] acquireMachinesLock for auto-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:03.283937   10045 start.go:364] duration metric: took 245.5µs to acquireMachinesLock for "auto-146000"
	I0408 04:50:03.284005   10045 start.go:93] Provisioning new machine with config: &{Name:auto-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:auto-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:03.284127   10045 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:03.293434   10045 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:03.320834   10045 start.go:159] libmachine.API.Create for "auto-146000" (driver="qemu2")
	I0408 04:50:03.320870   10045 client.go:168] LocalClient.Create starting
	I0408 04:50:03.320951   10045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:03.320993   10045 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:03.321005   10045 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:03.321046   10045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:03.321079   10045 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:03.321093   10045 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:03.321451   10045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:03.469886   10045 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:03.510443   10045 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:03.510448   10045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:03.510624   10045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2
	I0408 04:50:03.523162   10045 main.go:141] libmachine: STDOUT: 
	I0408 04:50:03.523198   10045 main.go:141] libmachine: STDERR: 
	I0408 04:50:03.523248   10045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2 +20000M
	I0408 04:50:03.535039   10045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:03.535064   10045 main.go:141] libmachine: STDERR: 
	I0408 04:50:03.535077   10045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2
	I0408 04:50:03.535087   10045 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:03.535137   10045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b7:ed:4b:63:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/auto-146000/disk.qcow2
	I0408 04:50:03.537002   10045 main.go:141] libmachine: STDOUT: 
	I0408 04:50:03.537017   10045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:03.537030   10045 client.go:171] duration metric: took 216.156542ms to LocalClient.Create
	I0408 04:50:05.539288   10045 start.go:128] duration metric: took 2.255147834s to createHost
	I0408 04:50:05.539394   10045 start.go:83] releasing machines lock for "auto-146000", held for 2.255475791s
	W0408 04:50:05.539654   10045 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:05.551243   10045 out.go:177] 
	W0408 04:50:05.555204   10045 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:50:05.555257   10045 out.go:239] * 
	* 
	W0408 04:50:05.558401   10045 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:50:05.562220   10045 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.69s)
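Note: the trace above shows where the connection actually happens: qemu-system-aarch64 is not executed directly but wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to QEMU as file descriptor 3 (-netdev socket,id=net0,fd=3). The "Connection refused" error therefore comes from the wrapper, before QEMU itself ever runs. A trimmed reproduction of the invocation from the log; every "..." elides flags that are unchanged from the trace above:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
		qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
		-device virtio-net-pci,netdev=net0,mac=... -netdev socket,id=net0,fd=3 ...

Run standalone against a dead daemon, this prints the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen throughout this report.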

TestNetworkPlugins/group/kindnet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.834777041s)

-- stdout --
	* [kindnet-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-146000" primary control-plane node in "kindnet-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:50:07.900599   10155 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:50:07.900726   10155 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:07.900729   10155 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:07.900731   10155 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:07.900862   10155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:50:07.901959   10155 out.go:298] Setting JSON to false
	I0408 04:50:07.918171   10155 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6576,"bootTime":1712570431,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:50:07.918234   10155 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:50:07.924387   10155 out.go:177] * [kindnet-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:50:07.930319   10155 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:50:07.933328   10155 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:50:07.930406   10155 notify.go:220] Checking for updates...
	I0408 04:50:07.936298   10155 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:50:07.939297   10155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:50:07.942267   10155 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:50:07.945259   10155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:50:07.948619   10155 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:50:07.948683   10155 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:50:07.948737   10155 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:50:07.953344   10155 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:50:07.960304   10155 start.go:297] selected driver: qemu2
	I0408 04:50:07.960312   10155 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:50:07.960319   10155 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:50:07.962665   10155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:50:07.965331   10155 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:50:07.968387   10155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:50:07.968429   10155 cni.go:84] Creating CNI manager for "kindnet"
	I0408 04:50:07.968439   10155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 04:50:07.968474   10155 start.go:340] cluster config:
	{Name:kindnet-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:50:07.972826   10155 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:50:07.980326   10155 out.go:177] * Starting "kindnet-146000" primary control-plane node in "kindnet-146000" cluster
	I0408 04:50:07.984246   10155 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:50:07.984260   10155 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:50:07.984267   10155 cache.go:56] Caching tarball of preloaded images
	I0408 04:50:07.984313   10155 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:50:07.984318   10155 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:50:07.984376   10155 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/kindnet-146000/config.json ...
	I0408 04:50:07.984389   10155 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/kindnet-146000/config.json: {Name:mk86c5dd523849fff0914ff699f5ae42a6d34b8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:50:07.984606   10155 start.go:360] acquireMachinesLock for kindnet-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:07.984636   10155 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "kindnet-146000"
	I0408 04:50:07.984647   10155 start.go:93] Provisioning new machine with config: &{Name:kindnet-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:kindnet-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:07.984675   10155 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:07.991391   10155 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:08.007834   10155 start.go:159] libmachine.API.Create for "kindnet-146000" (driver="qemu2")
	I0408 04:50:08.007864   10155 client.go:168] LocalClient.Create starting
	I0408 04:50:08.007922   10155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:08.007949   10155 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:08.007957   10155 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:08.007993   10155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:08.008015   10155 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:08.008020   10155 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:08.008362   10155 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:08.154323   10155 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:08.225485   10155 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:08.225492   10155 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:08.225679   10155 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2
	I0408 04:50:08.237983   10155 main.go:141] libmachine: STDOUT: 
	I0408 04:50:08.238014   10155 main.go:141] libmachine: STDERR: 
	I0408 04:50:08.238088   10155 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2 +20000M
	I0408 04:50:08.249156   10155 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:08.249175   10155 main.go:141] libmachine: STDERR: 
	I0408 04:50:08.249195   10155 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2
	I0408 04:50:08.249200   10155 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:08.249230   10155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:83:32:4f:f2:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2
	I0408 04:50:08.250939   10155 main.go:141] libmachine: STDOUT: 
	I0408 04:50:08.250955   10155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:08.250974   10155 client.go:171] duration metric: took 243.108083ms to LocalClient.Create
	I0408 04:50:10.253172   10155 start.go:128] duration metric: took 2.268492292s to createHost
	I0408 04:50:10.253247   10155 start.go:83] releasing machines lock for "kindnet-146000", held for 2.268633208s
	W0408 04:50:10.253392   10155 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:10.264642   10155 out.go:177] * Deleting "kindnet-146000" in qemu2 ...
	W0408 04:50:10.294515   10155 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:10.294548   10155 start.go:728] Will try again in 5 seconds ...
	I0408 04:50:15.295328   10155 start.go:360] acquireMachinesLock for kindnet-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:15.295940   10155 start.go:364] duration metric: took 449.208µs to acquireMachinesLock for "kindnet-146000"
	I0408 04:50:15.296106   10155 start.go:93] Provisioning new machine with config: &{Name:kindnet-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:kindnet-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:15.296493   10155 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:15.302182   10155 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:15.351494   10155 start.go:159] libmachine.API.Create for "kindnet-146000" (driver="qemu2")
	I0408 04:50:15.351563   10155 client.go:168] LocalClient.Create starting
	I0408 04:50:15.351694   10155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:15.351762   10155 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:15.351780   10155 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:15.351850   10155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:15.351893   10155 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:15.351907   10155 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:15.352440   10155 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:15.507248   10155 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:15.640691   10155 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:15.640700   10155 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:15.640899   10155 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2
	I0408 04:50:15.653454   10155 main.go:141] libmachine: STDOUT: 
	I0408 04:50:15.653480   10155 main.go:141] libmachine: STDERR: 
	I0408 04:50:15.653547   10155 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2 +20000M
	I0408 04:50:15.664248   10155 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:15.664285   10155 main.go:141] libmachine: STDERR: 
	I0408 04:50:15.664304   10155 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2
	I0408 04:50:15.664313   10155 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:15.664353   10155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:e9:de:6d:9c:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kindnet-146000/disk.qcow2
	I0408 04:50:15.666137   10155 main.go:141] libmachine: STDOUT: 
	I0408 04:50:15.666152   10155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:15.666173   10155 client.go:171] duration metric: took 314.605792ms to LocalClient.Create
	I0408 04:50:17.668342   10155 start.go:128] duration metric: took 2.371846459s to createHost
	I0408 04:50:17.668406   10155 start.go:83] releasing machines lock for "kindnet-146000", held for 2.372476416s
	W0408 04:50:17.668801   10155 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:17.677362   10155 out.go:177] 
	W0408 04:50:17.682534   10155 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:50:17.682573   10155 out.go:239] * 
	* 
	W0408 04:50:17.684381   10155 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:50:17.695359   10155 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.84s)
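Note: the error text itself names the recovery path once socket_vmnet is healthy again: delete the half-created profile, then start over. A sketch using the profile name and binary from this run:

	out/minikube-darwin-arm64 delete -p kindnet-146000
	out/minikube-darwin-arm64 start -p kindnet-146000 --memory=3072 --cni=kindnet --driver=qemu2

The delete step matters because the failed create leaves the saved profile config and disk image behind; the 'Deleting "kindnet-146000" in qemu2 ...' line above only runs on the in-process retry, not after the final failure.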

TestNetworkPlugins/group/calico/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.9213965s)

                                                
                                                
-- stdout --
	* [calico-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-146000" primary control-plane node in "calico-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:50:20.135491   10269 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:50:20.135616   10269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:20.135620   10269 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:20.135622   10269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:20.135740   10269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:50:20.136688   10269 out.go:298] Setting JSON to false
	I0408 04:50:20.153140   10269 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6589,"bootTime":1712570431,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:50:20.153203   10269 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:50:20.159274   10269 out.go:177] * [calico-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:50:20.175118   10269 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:50:20.168120   10269 notify.go:220] Checking for updates...
	I0408 04:50:20.183045   10269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:50:20.187067   10269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:50:20.190024   10269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:50:20.193070   10269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:50:20.196047   10269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:50:20.204398   10269 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:50:20.204467   10269 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:50:20.204518   10269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:50:20.209072   10269 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:50:20.216064   10269 start.go:297] selected driver: qemu2
	I0408 04:50:20.216071   10269 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:50:20.216078   10269 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:50:20.218440   10269 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:50:20.222068   10269 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:50:20.225203   10269 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:50:20.225251   10269 cni.go:84] Creating CNI manager for "calico"
	I0408 04:50:20.225259   10269 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0408 04:50:20.225309   10269 start.go:340] cluster config:
	{Name:calico-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:50:20.230055   10269 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:50:20.237086   10269 out.go:177] * Starting "calico-146000" primary control-plane node in "calico-146000" cluster
	I0408 04:50:20.241031   10269 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:50:20.241046   10269 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:50:20.241054   10269 cache.go:56] Caching tarball of preloaded images
	I0408 04:50:20.241105   10269 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:50:20.241111   10269 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:50:20.241162   10269 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/calico-146000/config.json ...
	I0408 04:50:20.241174   10269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/calico-146000/config.json: {Name:mk48392f3122693a18c428fc6cc2789de992a86e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:50:20.241413   10269 start.go:360] acquireMachinesLock for calico-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:20.241444   10269 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "calico-146000"
	I0408 04:50:20.241454   10269 start.go:93] Provisioning new machine with config: &{Name:calico-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:20.241484   10269 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:20.250038   10269 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:20.267378   10269 start.go:159] libmachine.API.Create for "calico-146000" (driver="qemu2")
	I0408 04:50:20.267414   10269 client.go:168] LocalClient.Create starting
	I0408 04:50:20.267481   10269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:20.267509   10269 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:20.267518   10269 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:20.267551   10269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:20.267574   10269 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:20.267580   10269 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:20.268051   10269 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:20.411134   10269 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:20.531717   10269 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:20.531729   10269 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:20.531934   10269 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2
	I0408 04:50:20.544873   10269 main.go:141] libmachine: STDOUT: 
	I0408 04:50:20.544900   10269 main.go:141] libmachine: STDERR: 
	I0408 04:50:20.544953   10269 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2 +20000M
	I0408 04:50:20.555941   10269 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:20.555957   10269 main.go:141] libmachine: STDERR: 
	I0408 04:50:20.555969   10269 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2
	I0408 04:50:20.555974   10269 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:20.556005   10269 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:2e:f3:58:0c:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2
	I0408 04:50:20.557774   10269 main.go:141] libmachine: STDOUT: 
	I0408 04:50:20.557790   10269 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:20.557810   10269 client.go:171] duration metric: took 290.391625ms to LocalClient.Create
	I0408 04:50:22.558129   10269 start.go:128] duration metric: took 2.318529792s to createHost
	I0408 04:50:22.558199   10269 start.go:83] releasing machines lock for "calico-146000", held for 2.318658458s
	W0408 04:50:22.558262   10269 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:22.568596   10269 out.go:177] * Deleting "calico-146000" in qemu2 ...
	W0408 04:50:22.602581   10269 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:22.602651   10269 start.go:728] Will try again in 5 seconds ...
	I0408 04:50:27.599673   10269 start.go:360] acquireMachinesLock for calico-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:27.599977   10269 start.go:364] duration metric: took 244.417µs to acquireMachinesLock for "calico-146000"
	I0408 04:50:27.600045   10269 start.go:93] Provisioning new machine with config: &{Name:calico-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:27.600137   10269 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:27.608562   10269 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:27.637107   10269 start.go:159] libmachine.API.Create for "calico-146000" (driver="qemu2")
	I0408 04:50:27.637155   10269 client.go:168] LocalClient.Create starting
	I0408 04:50:27.637227   10269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:27.637274   10269 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:27.637287   10269 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:27.637334   10269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:27.637365   10269 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:27.637373   10269 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:27.637803   10269 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:27.785629   10269 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:27.948775   10269 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:27.948789   10269 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:27.948975   10269 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2
	I0408 04:50:27.961575   10269 main.go:141] libmachine: STDOUT: 
	I0408 04:50:27.961606   10269 main.go:141] libmachine: STDERR: 
	I0408 04:50:27.961675   10269 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2 +20000M
	I0408 04:50:27.972529   10269 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:27.972546   10269 main.go:141] libmachine: STDERR: 
	I0408 04:50:27.972563   10269 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2
	I0408 04:50:27.972569   10269 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:27.972608   10269 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:4d:c4:a9:41:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/calico-146000/disk.qcow2
	I0408 04:50:27.974305   10269 main.go:141] libmachine: STDOUT: 
	I0408 04:50:27.974322   10269 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:27.974339   10269 client.go:171] duration metric: took 337.395042ms to LocalClient.Create
	I0408 04:50:29.975345   10269 start.go:128] duration metric: took 2.376616791s to createHost
	I0408 04:50:29.975420   10269 start.go:83] releasing machines lock for "calico-146000", held for 2.376872834s
	W0408 04:50:29.975909   10269 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:29.983151   10269 out.go:177] 
	W0408 04:50:29.991289   10269 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:50:29.991319   10269 out.go:239] * 
	W0408 04:50:29.993811   10269 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:50:30.007152   10269 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.92s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (10.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (10.119340458s)

                                                
                                                
-- stdout --
	* [custom-flannel-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-146000" primary control-plane node in "custom-flannel-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:50:32.536838   10389 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:50:32.536973   10389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:32.536977   10389 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:32.536982   10389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:32.537109   10389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:50:32.538206   10389 out.go:298] Setting JSON to false
	I0408 04:50:32.554457   10389 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6601,"bootTime":1712570431,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:50:32.554520   10389 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:50:32.562025   10389 out.go:177] * [custom-flannel-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:50:32.570812   10389 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:50:32.574809   10389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:50:32.570849   10389 notify.go:220] Checking for updates...
	I0408 04:50:32.580797   10389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:50:32.583804   10389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:50:32.586738   10389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:50:32.589777   10389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:50:32.593090   10389 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:50:32.593158   10389 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:50:32.593207   10389 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:50:32.597756   10389 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:50:32.606794   10389 start.go:297] selected driver: qemu2
	I0408 04:50:32.606804   10389 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:50:32.606812   10389 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:50:32.609132   10389 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:50:32.612770   10389 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:50:32.615761   10389 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:50:32.615793   10389 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0408 04:50:32.615801   10389 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0408 04:50:32.615834   10389 start.go:340] cluster config:
	{Name:custom-flannel-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:50:32.620139   10389 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:50:32.627617   10389 out.go:177] * Starting "custom-flannel-146000" primary control-plane node in "custom-flannel-146000" cluster
	I0408 04:50:32.631737   10389 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:50:32.631750   10389 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:50:32.631756   10389 cache.go:56] Caching tarball of preloaded images
	I0408 04:50:32.631816   10389 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:50:32.631821   10389 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:50:32.631871   10389 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/custom-flannel-146000/config.json ...
	I0408 04:50:32.631882   10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/custom-flannel-146000/config.json: {Name:mka4190efe36f4078181f54144606caf95848f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:50:32.632333   10389 start.go:360] acquireMachinesLock for custom-flannel-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:32.632368   10389 start.go:364] duration metric: took 28µs to acquireMachinesLock for "custom-flannel-146000"
	I0408 04:50:32.632378   10389 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:32.632403   10389 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:32.635705   10389 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:32.650163   10389 start.go:159] libmachine.API.Create for "custom-flannel-146000" (driver="qemu2")
	I0408 04:50:32.650186   10389 client.go:168] LocalClient.Create starting
	I0408 04:50:32.650253   10389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:32.650282   10389 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:32.650290   10389 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:32.650331   10389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:32.650352   10389 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:32.650359   10389 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:32.650704   10389 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:32.796932   10389 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:33.081202   10389 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:33.081214   10389 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:33.081412   10389 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2
	I0408 04:50:33.094176   10389 main.go:141] libmachine: STDOUT: 
	I0408 04:50:33.094200   10389 main.go:141] libmachine: STDERR: 
	I0408 04:50:33.094280   10389 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2 +20000M
	I0408 04:50:33.105495   10389 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:33.105512   10389 main.go:141] libmachine: STDERR: 
	I0408 04:50:33.105535   10389 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2
	I0408 04:50:33.105547   10389 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:33.105585   10389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:48:ff:cf:75:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2
	I0408 04:50:33.107354   10389 main.go:141] libmachine: STDOUT: 
	I0408 04:50:33.107372   10389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:33.107393   10389 client.go:171] duration metric: took 457.414917ms to LocalClient.Create
	I0408 04:50:35.108738   10389 start.go:128] duration metric: took 2.477396375s to createHost
	I0408 04:50:35.108839   10389 start.go:83] releasing machines lock for "custom-flannel-146000", held for 2.477557792s
	W0408 04:50:35.108899   10389 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:35.122335   10389 out.go:177] * Deleting "custom-flannel-146000" in qemu2 ...
	W0408 04:50:35.150206   10389 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:35.150230   10389 start.go:728] Will try again in 5 seconds ...
	I0408 04:50:40.150802   10389 start.go:360] acquireMachinesLock for custom-flannel-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:40.151259   10389 start.go:364] duration metric: took 325.542µs to acquireMachinesLock for "custom-flannel-146000"
	I0408 04:50:40.151399   10389 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:40.151605   10389 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:40.159998   10389 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:40.199765   10389 start.go:159] libmachine.API.Create for "custom-flannel-146000" (driver="qemu2")
	I0408 04:50:40.199798   10389 client.go:168] LocalClient.Create starting
	I0408 04:50:40.199906   10389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:40.199964   10389 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:40.199981   10389 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:40.200045   10389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:40.200082   10389 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:40.200095   10389 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:40.200625   10389 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:40.353132   10389 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:40.557750   10389 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:40.557759   10389 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:40.557984   10389 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2
	I0408 04:50:40.571039   10389 main.go:141] libmachine: STDOUT: 
	I0408 04:50:40.571057   10389 main.go:141] libmachine: STDERR: 
	I0408 04:50:40.571133   10389 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2 +20000M
	I0408 04:50:40.582399   10389 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:40.582422   10389 main.go:141] libmachine: STDERR: 
	I0408 04:50:40.582439   10389 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2
	I0408 04:50:40.582442   10389 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:40.582471   10389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:f3:03:56:49:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/custom-flannel-146000/disk.qcow2
	I0408 04:50:40.584523   10389 main.go:141] libmachine: STDOUT: 
	I0408 04:50:40.584542   10389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:40.584562   10389 client.go:171] duration metric: took 384.875541ms to LocalClient.Create
	I0408 04:50:42.585805   10389 start.go:128] duration metric: took 2.43482s to createHost
	I0408 04:50:42.585907   10389 start.go:83] releasing machines lock for "custom-flannel-146000", held for 2.435299459s
	W0408 04:50:42.586285   10389 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:42.597849   10389 out.go:177] 
	W0408 04:50:42.600835   10389 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:50:42.600866   10389 out.go:239] * 
	W0408 04:50:42.602424   10389 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:50:42.611786   10389 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (10.12s)

                                                
                                    
TestNetworkPlugins/group/false/Start (9.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.785049917s)

                                                
                                                
-- stdout --
	* [false-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-146000" primary control-plane node in "false-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 04:50:45.100258   10512 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:50:45.100406   10512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:45.100413   10512 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:45.100415   10512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:45.100542   10512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:50:45.101625   10512 out.go:298] Setting JSON to false
	I0408 04:50:45.118103   10512 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6614,"bootTime":1712570431,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:50:45.118171   10512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:50:45.123082   10512 out.go:177] * [false-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:50:45.131005   10512 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:50:45.134996   10512 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:50:45.131064   10512 notify.go:220] Checking for updates...
	I0408 04:50:45.140958   10512 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:50:45.143984   10512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:50:45.146907   10512 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:50:45.149962   10512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:50:45.153272   10512 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:50:45.153343   10512 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:50:45.153390   10512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:50:45.157921   10512 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:50:45.165011   10512 start.go:297] selected driver: qemu2
	I0408 04:50:45.165017   10512 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:50:45.165035   10512 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:50:45.167198   10512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:50:45.169947   10512 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:50:45.173057   10512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:50:45.173110   10512 cni.go:84] Creating CNI manager for "false"
	I0408 04:50:45.173148   10512 start.go:340] cluster config:
	{Name:false-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:50:45.177315   10512 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:50:45.184910   10512 out.go:177] * Starting "false-146000" primary control-plane node in "false-146000" cluster
	I0408 04:50:45.188980   10512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:50:45.188997   10512 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:50:45.189007   10512 cache.go:56] Caching tarball of preloaded images
	I0408 04:50:45.189066   10512 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:50:45.189072   10512 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:50:45.189149   10512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/false-146000/config.json ...
	I0408 04:50:45.189161   10512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/false-146000/config.json: {Name:mk15dbdcf540be9c570271f170cfa9119f091960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:50:45.189370   10512 start.go:360] acquireMachinesLock for false-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:45.189397   10512 start.go:364] duration metric: took 22.084µs to acquireMachinesLock for "false-146000"
	I0408 04:50:45.189406   10512 start.go:93] Provisioning new machine with config: &{Name:false-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:45.189432   10512 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:45.197988   10512 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:45.213400   10512 start.go:159] libmachine.API.Create for "false-146000" (driver="qemu2")
	I0408 04:50:45.213432   10512 client.go:168] LocalClient.Create starting
	I0408 04:50:45.213505   10512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:45.213533   10512 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:45.213541   10512 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:45.213575   10512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:45.213596   10512 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:45.213604   10512 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:45.214035   10512 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:45.358671   10512 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:45.431062   10512 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:45.431067   10512 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:45.431228   10512 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2
	I0408 04:50:45.443817   10512 main.go:141] libmachine: STDOUT: 
	I0408 04:50:45.443840   10512 main.go:141] libmachine: STDERR: 
	I0408 04:50:45.443890   10512 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2 +20000M
	I0408 04:50:45.454962   10512 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:45.454981   10512 main.go:141] libmachine: STDERR: 
	I0408 04:50:45.454998   10512 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2
	I0408 04:50:45.455003   10512 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:45.455041   10512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b3:93:c4:1b:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2
	I0408 04:50:45.456770   10512 main.go:141] libmachine: STDOUT: 
	I0408 04:50:45.456790   10512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:45.456812   10512 client.go:171] duration metric: took 243.427291ms to LocalClient.Create
	I0408 04:50:47.458601   10512 start.go:128] duration metric: took 2.269607791s to createHost
	I0408 04:50:47.458687   10512 start.go:83] releasing machines lock for "false-146000", held for 2.269753875s
	W0408 04:50:47.458757   10512 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:47.470202   10512 out.go:177] * Deleting "false-146000" in qemu2 ...
	W0408 04:50:47.504036   10512 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:47.504066   10512 start.go:728] Will try again in 5 seconds ...
	I0408 04:50:52.505504   10512 start.go:360] acquireMachinesLock for false-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:52.506063   10512 start.go:364] duration metric: took 445.166µs to acquireMachinesLock for "false-146000"
	I0408 04:50:52.506231   10512 start.go:93] Provisioning new machine with config: &{Name:false-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:52.506528   10512 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:52.515121   10512 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:52.564528   10512 start.go:159] libmachine.API.Create for "false-146000" (driver="qemu2")
	I0408 04:50:52.564586   10512 client.go:168] LocalClient.Create starting
	I0408 04:50:52.564722   10512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:52.564797   10512 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:52.564817   10512 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:52.564896   10512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:52.564938   10512 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:52.564954   10512 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:52.565520   10512 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:52.718924   10512 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:52.784122   10512 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:52.784129   10512 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:52.784301   10512 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2
	I0408 04:50:52.796877   10512 main.go:141] libmachine: STDOUT: 
	I0408 04:50:52.796983   10512 main.go:141] libmachine: STDERR: 
	I0408 04:50:52.797045   10512 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2 +20000M
	I0408 04:50:52.808301   10512 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:52.808317   10512 main.go:141] libmachine: STDERR: 
	I0408 04:50:52.808332   10512 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2
	I0408 04:50:52.808335   10512 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:52.808367   10512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:4e:92:af:b1:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/false-146000/disk.qcow2
	I0408 04:50:52.810153   10512 main.go:141] libmachine: STDOUT: 
	I0408 04:50:52.810291   10512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:52.810304   10512 client.go:171] duration metric: took 245.744958ms to LocalClient.Create
	I0408 04:50:54.812143   10512 start.go:128] duration metric: took 2.305895916s to createHost
	I0408 04:50:54.812185   10512 start.go:83] releasing machines lock for "false-146000", held for 2.306416083s
	W0408 04:50:54.812270   10512 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:54.827443   10512 out.go:177] 
	W0408 04:50:54.830369   10512 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:50:54.830380   10512 out.go:239] * 
	W0408 04:50:54.830819   10512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:50:54.845360   10512 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)
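Every attempt above fails at the same step: the qcow2 disk image is created successfully with qemu-img, but the qemu-system-aarch64 launch goes through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal way to check the daemon on the affected host is sketched below; the daemon binary path and the gateway address are assumptions inferred from the client path recorded in the log and socket_vmnet's documented defaults, not values verified by this report.

	# Does the socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is not running, start it (assumed install path and
	# default gateway; adjust to the local setup):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon listening, the exact socket_vmnet_client invocation recorded above should connect instead of being refused, letting createHost proceed past VM startup. The same root cause explains every remaining failure in this group.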

TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.787783s)

-- stdout --
	* [enable-default-cni-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-146000" primary control-plane node in "enable-default-cni-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:50:57.064113   10627 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:50:57.064243   10627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:57.064247   10627 out.go:304] Setting ErrFile to fd 2...
	I0408 04:50:57.064249   10627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:50:57.064373   10627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:50:57.065450   10627 out.go:298] Setting JSON to false
	I0408 04:50:57.081717   10627 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6626,"bootTime":1712570431,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:50:57.081783   10627 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:50:57.087032   10627 out.go:177] * [enable-default-cni-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:50:57.093856   10627 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:50:57.093887   10627 notify.go:220] Checking for updates...
	I0408 04:50:57.100852   10627 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:50:57.103847   10627 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:50:57.106822   10627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:50:57.109822   10627 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:50:57.112782   10627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:50:57.116124   10627 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:50:57.116185   10627 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:50:57.116228   10627 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:50:57.119831   10627 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:50:57.126841   10627 start.go:297] selected driver: qemu2
	I0408 04:50:57.126848   10627 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:50:57.126854   10627 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:50:57.129027   10627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:50:57.130699   10627 out.go:177] * Automatically selected the socket_vmnet network
	E0408 04:50:57.133868   10627 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0408 04:50:57.133881   10627 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:50:57.133913   10627 cni.go:84] Creating CNI manager for "bridge"
	I0408 04:50:57.133918   10627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:50:57.133950   10627 start.go:340] cluster config:
	{Name:enable-default-cni-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:50:57.138004   10627 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:50:57.144814   10627 out.go:177] * Starting "enable-default-cni-146000" primary control-plane node in "enable-default-cni-146000" cluster
	I0408 04:50:57.148849   10627 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:50:57.148866   10627 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:50:57.148877   10627 cache.go:56] Caching tarball of preloaded images
	I0408 04:50:57.148935   10627 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:50:57.148941   10627 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:50:57.149006   10627 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/enable-default-cni-146000/config.json ...
	I0408 04:50:57.149018   10627 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/enable-default-cni-146000/config.json: {Name:mk43f916a4950f5df9aae176d7650d8bf95474ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:50:57.149347   10627 start.go:360] acquireMachinesLock for enable-default-cni-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:50:57.149375   10627 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "enable-default-cni-146000"
	I0408 04:50:57.149387   10627 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:50:57.149418   10627 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:50:57.156900   10627 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:50:57.171536   10627 start.go:159] libmachine.API.Create for "enable-default-cni-146000" (driver="qemu2")
	I0408 04:50:57.171565   10627 client.go:168] LocalClient.Create starting
	I0408 04:50:57.171629   10627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:50:57.171656   10627 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:57.171664   10627 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:57.171704   10627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:50:57.171724   10627 main.go:141] libmachine: Decoding PEM data...
	I0408 04:50:57.171731   10627 main.go:141] libmachine: Parsing certificate...
	I0408 04:50:57.172085   10627 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:50:57.317441   10627 main.go:141] libmachine: Creating SSH key...
	I0408 04:50:57.392364   10627 main.go:141] libmachine: Creating Disk image...
	I0408 04:50:57.392369   10627 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:50:57.392541   10627 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2
	I0408 04:50:57.405237   10627 main.go:141] libmachine: STDOUT: 
	I0408 04:50:57.405266   10627 main.go:141] libmachine: STDERR: 
	I0408 04:50:57.405327   10627 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2 +20000M
	I0408 04:50:57.416031   10627 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:50:57.416048   10627 main.go:141] libmachine: STDERR: 
	I0408 04:50:57.416067   10627 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2
	I0408 04:50:57.416072   10627 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:50:57.416101   10627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:37:bb:01:be:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2
	I0408 04:50:57.417833   10627 main.go:141] libmachine: STDOUT: 
	I0408 04:50:57.417848   10627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:50:57.417868   10627 client.go:171] duration metric: took 246.32225ms to LocalClient.Create
	I0408 04:50:59.419907   10627 start.go:128] duration metric: took 2.270699791s to createHost
	I0408 04:50:59.419991   10627 start.go:83] releasing machines lock for "enable-default-cni-146000", held for 2.270845375s
	W0408 04:50:59.420207   10627 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:59.437336   10627 out.go:177] * Deleting "enable-default-cni-146000" in qemu2 ...
	W0408 04:50:59.468212   10627 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:50:59.468249   10627 start.go:728] Will try again in 5 seconds ...
	I0408 04:51:04.470118   10627 start.go:360] acquireMachinesLock for enable-default-cni-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:04.470718   10627 start.go:364] duration metric: took 460.458µs to acquireMachinesLock for "enable-default-cni-146000"
	I0408 04:51:04.470895   10627 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:04.471232   10627 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:04.479863   10627 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:51:04.528325   10627 start.go:159] libmachine.API.Create for "enable-default-cni-146000" (driver="qemu2")
	I0408 04:51:04.528377   10627 client.go:168] LocalClient.Create starting
	I0408 04:51:04.528499   10627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:04.528582   10627 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:04.528595   10627 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:04.528657   10627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:04.528703   10627 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:04.528716   10627 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:04.529262   10627 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:04.681918   10627 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:04.759795   10627 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:04.759802   10627 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:04.759981   10627 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2
	I0408 04:51:04.772468   10627 main.go:141] libmachine: STDOUT: 
	I0408 04:51:04.772488   10627 main.go:141] libmachine: STDERR: 
	I0408 04:51:04.772547   10627 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2 +20000M
	I0408 04:51:04.783824   10627 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:04.783841   10627 main.go:141] libmachine: STDERR: 
	I0408 04:51:04.783855   10627 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2
	I0408 04:51:04.783863   10627 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:04.783894   10627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:a3:36:71:35:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/enable-default-cni-146000/disk.qcow2
	I0408 04:51:04.785573   10627 main.go:141] libmachine: STDOUT: 
	I0408 04:51:04.785588   10627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:04.785607   10627 client.go:171] duration metric: took 257.242416ms to LocalClient.Create
	I0408 04:51:06.787677   10627 start.go:128] duration metric: took 2.316579875s to createHost
	I0408 04:51:06.787713   10627 start.go:83] releasing machines lock for "enable-default-cni-146000", held for 2.317136042s
	W0408 04:51:06.787845   10627 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:06.795206   10627 out.go:177] 
	W0408 04:51:06.799216   10627 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:51:06.799227   10627 out.go:239] * 
	W0408 04:51:06.800103   10627 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:51:06.812087   10627 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
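One detail separates this run from the previous one: the E0408 04:50:57 line shows that --enable-default-cni is deprecated, and minikube rewrites it to --cni=bridge before building the cluster config (NetworkPlugin:cni, CNI:bridge). The non-deprecated equivalent of the command under test, shown here only as an illustration, would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-146000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2

Either form fails identically here, since the CNI choice never comes into play before the socket_vmnet connection is refused.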

TestNetworkPlugins/group/flannel/Start (10.16s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.16139825s)

-- stdout --
	* [flannel-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-146000" primary control-plane node in "flannel-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:51:09.014850   10740 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:51:09.014991   10740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:09.014997   10740 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:09.015000   10740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:09.015106   10740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:51:09.016199   10740 out.go:298] Setting JSON to false
	I0408 04:51:09.032910   10740 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6638,"bootTime":1712570431,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:51:09.032971   10740 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:51:09.039906   10740 out.go:177] * [flannel-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:51:09.052682   10740 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:51:09.047963   10740 notify.go:220] Checking for updates...
	I0408 04:51:09.059785   10740 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:51:09.061016   10740 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:51:09.063787   10740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:51:09.066831   10740 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:51:09.069809   10740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:51:09.073256   10740 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:51:09.073327   10740 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:51:09.073373   10740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:51:09.077839   10740 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:51:09.084794   10740 start.go:297] selected driver: qemu2
	I0408 04:51:09.084800   10740 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:51:09.084806   10740 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:51:09.087020   10740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:51:09.089757   10740 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:51:09.092857   10740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:51:09.092891   10740 cni.go:84] Creating CNI manager for "flannel"
	I0408 04:51:09.092895   10740 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0408 04:51:09.092922   10740 start.go:340] cluster config:
	{Name:flannel-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:51:09.097007   10740 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:51:09.103784   10740 out.go:177] * Starting "flannel-146000" primary control-plane node in "flannel-146000" cluster
	I0408 04:51:09.106735   10740 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:51:09.106751   10740 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:51:09.106758   10740 cache.go:56] Caching tarball of preloaded images
	I0408 04:51:09.106805   10740 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:51:09.106810   10740 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:51:09.106854   10740 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/flannel-146000/config.json ...
	I0408 04:51:09.106865   10740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/flannel-146000/config.json: {Name:mk84af5bae58e371e63b0b546cad44c1de489d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:51:09.107071   10740 start.go:360] acquireMachinesLock for flannel-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:09.107098   10740 start.go:364] duration metric: took 22.708µs to acquireMachinesLock for "flannel-146000"
	I0408 04:51:09.107108   10740 start.go:93] Provisioning new machine with config: &{Name:flannel-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:09.107147   10740 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:09.114652   10740 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:51:09.129322   10740 start.go:159] libmachine.API.Create for "flannel-146000" (driver="qemu2")
	I0408 04:51:09.129345   10740 client.go:168] LocalClient.Create starting
	I0408 04:51:09.129419   10740 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:09.129449   10740 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:09.129459   10740 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:09.129491   10740 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:09.129513   10740 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:09.129521   10740 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:09.130082   10740 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:09.275660   10740 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:09.479074   10740 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:09.479086   10740 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:09.479297   10740 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2
	I0408 04:51:09.492069   10740 main.go:141] libmachine: STDOUT: 
	I0408 04:51:09.492089   10740 main.go:141] libmachine: STDERR: 
	I0408 04:51:09.492143   10740 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2 +20000M
	I0408 04:51:09.502884   10740 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:09.502899   10740 main.go:141] libmachine: STDERR: 
	I0408 04:51:09.502922   10740 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2
	I0408 04:51:09.502926   10740 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:09.502954   10740 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:e7:7c:37:07:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2
	I0408 04:51:09.504689   10740 main.go:141] libmachine: STDOUT: 
	I0408 04:51:09.504708   10740 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:09.504727   10740 client.go:171] duration metric: took 375.398834ms to LocalClient.Create
	I0408 04:51:11.506786   10740 start.go:128] duration metric: took 2.399755083s to createHost
	I0408 04:51:11.506845   10740 start.go:83] releasing machines lock for "flannel-146000", held for 2.399876959s
	W0408 04:51:11.506909   10740 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:11.516696   10740 out.go:177] * Deleting "flannel-146000" in qemu2 ...
	W0408 04:51:11.543313   10740 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:11.543337   10740 start.go:728] Will try again in 5 seconds ...
	I0408 04:51:16.545247   10740 start.go:360] acquireMachinesLock for flannel-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:16.545539   10740 start.go:364] duration metric: took 232.334µs to acquireMachinesLock for "flannel-146000"
	I0408 04:51:16.545615   10740 start.go:93] Provisioning new machine with config: &{Name:flannel-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:16.545742   10740 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:16.554339   10740 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:51:16.587610   10740 start.go:159] libmachine.API.Create for "flannel-146000" (driver="qemu2")
	I0408 04:51:16.587655   10740 client.go:168] LocalClient.Create starting
	I0408 04:51:16.587747   10740 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:16.587798   10740 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:16.587816   10740 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:16.587870   10740 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:16.587906   10740 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:16.587917   10740 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:16.588355   10740 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:16.736382   10740 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:17.076240   10740 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:17.076260   10740 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:17.076500   10740 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2
	I0408 04:51:17.089688   10740 main.go:141] libmachine: STDOUT: 
	I0408 04:51:17.089712   10740 main.go:141] libmachine: STDERR: 
	I0408 04:51:17.089770   10740 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2 +20000M
	I0408 04:51:17.100596   10740 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:17.100615   10740 main.go:141] libmachine: STDERR: 
	I0408 04:51:17.100625   10740 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2
	I0408 04:51:17.100631   10740 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:17.100665   10740 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:0c:0b:7b:27:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/flannel-146000/disk.qcow2
	I0408 04:51:17.102570   10740 main.go:141] libmachine: STDOUT: 
	I0408 04:51:17.102593   10740 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:17.102608   10740 client.go:171] duration metric: took 514.969167ms to LocalClient.Create
	I0408 04:51:19.104730   10740 start.go:128] duration metric: took 2.559052875s to createHost
	I0408 04:51:19.104798   10740 start.go:83] releasing machines lock for "flannel-146000", held for 2.55935225s
	W0408 04:51:19.105123   10740 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:19.112516   10740 out.go:177] 
	W0408 04:51:19.119635   10740 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:51:19.119660   10740 out.go:239] * 
	* 
	W0408 04:51:19.121660   10740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:51:19.134517   10740 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.16s)
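Every failure in this group has the same proximate cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and minikube aborts with exit status 80. Before re-running the suite, the daemon can be probed directly; the following is a minimal standalone sketch (not part of the test harness; the socket path is taken from the log above):

	// probe.go - hedged sketch: dial the unix socket that the log above
	// reports as "Connection refused" to see whether the daemon is up.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Mirrors the failure mode in the log: a refused connection
			// means no daemon is listening at this path.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused, the daemon itself (typically supervised by launchd on these hosts, though the exact service label depends on the installation) needs to be restarted before any of the network-plugin starts can succeed.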

TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.8215155s)

-- stdout --
	* [bridge-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-146000" primary control-plane node in "bridge-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:51:21.573118   10859 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:51:21.573256   10859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:21.573259   10859 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:21.573262   10859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:21.573397   10859 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:51:21.574476   10859 out.go:298] Setting JSON to false
	I0408 04:51:21.591273   10859 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6650,"bootTime":1712570431,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:51:21.591339   10859 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:51:21.598140   10859 out.go:177] * [bridge-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:51:21.606309   10859 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:51:21.609272   10859 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:51:21.606396   10859 notify.go:220] Checking for updates...
	I0408 04:51:21.615269   10859 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:51:21.618245   10859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:51:21.621255   10859 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:51:21.624268   10859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:51:21.626114   10859 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:51:21.626180   10859 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:51:21.626227   10859 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:51:21.630254   10859 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:51:21.637172   10859 start.go:297] selected driver: qemu2
	I0408 04:51:21.637179   10859 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:51:21.637184   10859 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:51:21.639452   10859 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:51:21.642220   10859 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:51:21.645380   10859 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:51:21.645433   10859 cni.go:84] Creating CNI manager for "bridge"
	I0408 04:51:21.645437   10859 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:51:21.645464   10859 start.go:340] cluster config:
	{Name:bridge-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:51:21.649740   10859 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:51:21.657287   10859 out.go:177] * Starting "bridge-146000" primary control-plane node in "bridge-146000" cluster
	I0408 04:51:21.661287   10859 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:51:21.661302   10859 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:51:21.661309   10859 cache.go:56] Caching tarball of preloaded images
	I0408 04:51:21.661356   10859 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:51:21.661361   10859 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:51:21.661401   10859 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/bridge-146000/config.json ...
	I0408 04:51:21.661416   10859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/bridge-146000/config.json: {Name:mk9af0087d511b17c775c6dbb6ce8a813db37096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:51:21.661619   10859 start.go:360] acquireMachinesLock for bridge-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:21.661646   10859 start.go:364] duration metric: took 22.208µs to acquireMachinesLock for "bridge-146000"
	I0408 04:51:21.661655   10859 start.go:93] Provisioning new machine with config: &{Name:bridge-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:21.661680   10859 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:21.669286   10859 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:51:21.684060   10859 start.go:159] libmachine.API.Create for "bridge-146000" (driver="qemu2")
	I0408 04:51:21.684091   10859 client.go:168] LocalClient.Create starting
	I0408 04:51:21.684171   10859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:21.684202   10859 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:21.684213   10859 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:21.684247   10859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:21.684268   10859 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:21.684275   10859 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:21.684615   10859 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:21.829933   10859 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:21.898836   10859 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:21.898850   10859 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:21.899051   10859 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2
	I0408 04:51:21.911899   10859 main.go:141] libmachine: STDOUT: 
	I0408 04:51:21.911920   10859 main.go:141] libmachine: STDERR: 
	I0408 04:51:21.911973   10859 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2 +20000M
	I0408 04:51:21.923747   10859 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:21.923766   10859 main.go:141] libmachine: STDERR: 
	I0408 04:51:21.923794   10859 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2
	I0408 04:51:21.923800   10859 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:21.923834   10859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:55:25:4e:6d:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2
	I0408 04:51:21.925793   10859 main.go:141] libmachine: STDOUT: 
	I0408 04:51:21.925808   10859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:21.925827   10859 client.go:171] duration metric: took 241.738333ms to LocalClient.Create
	I0408 04:51:23.928054   10859 start.go:128] duration metric: took 2.26641575s to createHost
	I0408 04:51:23.928189   10859 start.go:83] releasing machines lock for "bridge-146000", held for 2.266615875s
	W0408 04:51:23.928233   10859 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:23.943980   10859 out.go:177] * Deleting "bridge-146000" in qemu2 ...
	W0408 04:51:23.969558   10859 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:23.969582   10859 start.go:728] Will try again in 5 seconds ...
	I0408 04:51:28.971529   10859 start.go:360] acquireMachinesLock for bridge-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:28.971758   10859 start.go:364] duration metric: took 193.958µs to acquireMachinesLock for "bridge-146000"
	I0408 04:51:28.971821   10859 start.go:93] Provisioning new machine with config: &{Name:bridge-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.29.3 ClusterName:bridge-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:28.971991   10859 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:28.976709   10859 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:51:29.008862   10859 start.go:159] libmachine.API.Create for "bridge-146000" (driver="qemu2")
	I0408 04:51:29.008903   10859 client.go:168] LocalClient.Create starting
	I0408 04:51:29.009004   10859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:29.009067   10859 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:29.009084   10859 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:29.009147   10859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:29.009193   10859 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:29.009204   10859 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:29.009887   10859 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:29.159997   10859 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:29.291705   10859 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:29.291711   10859 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:29.291915   10859 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2
	I0408 04:51:29.304528   10859 main.go:141] libmachine: STDOUT: 
	I0408 04:51:29.304556   10859 main.go:141] libmachine: STDERR: 
	I0408 04:51:29.304612   10859 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2 +20000M
	I0408 04:51:29.315671   10859 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:29.315688   10859 main.go:141] libmachine: STDERR: 
	I0408 04:51:29.315702   10859 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2
	I0408 04:51:29.315707   10859 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:29.315732   10859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e1:f0:08:d5:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/bridge-146000/disk.qcow2
	I0408 04:51:29.317567   10859 main.go:141] libmachine: STDOUT: 
	I0408 04:51:29.317581   10859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:29.317597   10859 client.go:171] duration metric: took 308.69875ms to LocalClient.Create
	I0408 04:51:31.319736   10859 start.go:128] duration metric: took 2.347780625s to createHost
	I0408 04:51:31.319888   10859 start.go:83] releasing machines lock for "bridge-146000", held for 2.348159209s
	W0408 04:51:31.320263   10859 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:31.331832   10859 out.go:177] 
	W0408 04:51:31.335879   10859 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:51:31.335907   10859 out.go:239] * 
	* 
	W0408 04:51:31.338737   10859 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:51:31.349891   10859 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
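Note that everything up to the VM launch succeeds on both attempts: the qcow2 disk is converted from the raw image and grown by +20000M without error, and only the socket_vmnet connection fails. For reference, the two qemu-img invocations visible in the log can be reproduced standalone; this is an illustrative sketch with a placeholder path, not the harness's actual code:

	// diskimage.go - hedged reconstruction of the qemu-img convert/resize
	// sequence shown in the log; the path below is a placeholder.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		disk := "/tmp/demo/disk.qcow2" // placeholder, not the CI machines dir
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk + ".raw", disk},
			{"qemu-img", "resize", disk, "+20000M"},
		}
		for _, args := range steps {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				log.Fatalf("%v: %v\n%s", args, err, out)
			}
		}
	}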

TestNetworkPlugins/group/kubenet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-146000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.872098625s)

-- stdout --
	* [kubenet-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-146000" primary control-plane node in "kubenet-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:51:33.676206   10974 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:51:33.676350   10974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:33.676353   10974 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:33.676356   10974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:33.676485   10974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:51:33.677621   10974 out.go:298] Setting JSON to false
	I0408 04:51:33.693794   10974 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6662,"bootTime":1712570431,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:51:33.693855   10974 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:51:33.700201   10974 out.go:177] * [kubenet-146000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:51:33.709367   10974 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:51:33.713391   10974 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:51:33.709422   10974 notify.go:220] Checking for updates...
	I0408 04:51:33.719409   10974 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:51:33.722394   10974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:51:33.725336   10974 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:51:33.728373   10974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:51:33.730320   10974 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:51:33.730381   10974 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:51:33.730428   10974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:51:33.734357   10974 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:51:33.741198   10974 start.go:297] selected driver: qemu2
	I0408 04:51:33.741203   10974 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:51:33.741213   10974 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:51:33.743381   10974 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:51:33.746359   10974 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:51:33.749409   10974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:51:33.749441   10974 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0408 04:51:33.749468   10974 start.go:340] cluster config:
	{Name:kubenet-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:51:33.753598   10974 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:51:33.760351   10974 out.go:177] * Starting "kubenet-146000" primary control-plane node in "kubenet-146000" cluster
	I0408 04:51:33.764475   10974 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:51:33.764492   10974 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:51:33.764505   10974 cache.go:56] Caching tarball of preloaded images
	I0408 04:51:33.764552   10974 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:51:33.764559   10974 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:51:33.764609   10974 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/kubenet-146000/config.json ...
	I0408 04:51:33.764621   10974 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/kubenet-146000/config.json: {Name:mk59b73e4675aef5a8ea17938734a1602611d837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:51:33.764824   10974 start.go:360] acquireMachinesLock for kubenet-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:33.764853   10974 start.go:364] duration metric: took 22.333µs to acquireMachinesLock for "kubenet-146000"
	I0408 04:51:33.764864   10974 start.go:93] Provisioning new machine with config: &{Name:kubenet-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:33.764904   10974 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:33.769378   10974 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:51:33.784093   10974 start.go:159] libmachine.API.Create for "kubenet-146000" (driver="qemu2")
	I0408 04:51:33.784125   10974 client.go:168] LocalClient.Create starting
	I0408 04:51:33.784181   10974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:33.784207   10974 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:33.784219   10974 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:33.784257   10974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:33.784278   10974 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:33.784283   10974 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:33.784705   10974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:33.928718   10974 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:34.007183   10974 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:34.007194   10974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:34.007397   10974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2
	I0408 04:51:34.020335   10974 main.go:141] libmachine: STDOUT: 
	I0408 04:51:34.020359   10974 main.go:141] libmachine: STDERR: 
	I0408 04:51:34.020413   10974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2 +20000M
	I0408 04:51:34.031136   10974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:34.031152   10974 main.go:141] libmachine: STDERR: 
	I0408 04:51:34.031175   10974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2
	I0408 04:51:34.031180   10974 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:34.031210   10974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:83:44:b0:19:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2
	I0408 04:51:34.032896   10974 main.go:141] libmachine: STDOUT: 
	I0408 04:51:34.032913   10974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:34.032933   10974 client.go:171] duration metric: took 248.808417ms to LocalClient.Create
	I0408 04:51:36.033228   10974 start.go:128] duration metric: took 2.268359709s to createHost
	I0408 04:51:36.033265   10974 start.go:83] releasing machines lock for "kubenet-146000", held for 2.268461s
	W0408 04:51:36.033297   10974 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:36.042676   10974 out.go:177] * Deleting "kubenet-146000" in qemu2 ...
	W0408 04:51:36.067578   10974 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:36.067598   10974 start.go:728] Will try again in 5 seconds ...
	I0408 04:51:41.069665   10974 start.go:360] acquireMachinesLock for kubenet-146000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:41.069772   10974 start.go:364] duration metric: took 75.5µs to acquireMachinesLock for "kubenet-146000"
	I0408 04:51:41.069798   10974 start.go:93] Provisioning new machine with config: &{Name:kubenet-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:41.069849   10974 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:41.078774   10974 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 04:51:41.094328   10974 start.go:159] libmachine.API.Create for "kubenet-146000" (driver="qemu2")
	I0408 04:51:41.094370   10974 client.go:168] LocalClient.Create starting
	I0408 04:51:41.094434   10974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:41.094469   10974 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:41.094479   10974 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:41.094513   10974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:41.094534   10974 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:41.094539   10974 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:41.094851   10974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:41.325793   10974 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:41.444188   10974 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:41.444198   10974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:41.444388   10974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2
	I0408 04:51:41.457043   10974 main.go:141] libmachine: STDOUT: 
	I0408 04:51:41.457069   10974 main.go:141] libmachine: STDERR: 
	I0408 04:51:41.457152   10974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2 +20000M
	I0408 04:51:41.470199   10974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:41.470217   10974 main.go:141] libmachine: STDERR: 
	I0408 04:51:41.470235   10974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2
	I0408 04:51:41.470243   10974 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:41.470285   10974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:93:27:9f:5f:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/kubenet-146000/disk.qcow2
	I0408 04:51:41.472069   10974 main.go:141] libmachine: STDOUT: 
	I0408 04:51:41.472087   10974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:41.472101   10974 client.go:171] duration metric: took 377.734208ms to LocalClient.Create
	I0408 04:51:43.474458   10974 start.go:128] duration metric: took 2.4046105s to createHost
	I0408 04:51:43.474578   10974 start.go:83] releasing machines lock for "kubenet-146000", held for 2.40484975s
	W0408 04:51:43.474925   10974 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:43.484523   10974 out.go:177] 
	W0408 04:51:43.488739   10974 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:51:43.488788   10974 out.go:239] * 
	* 
	W0408 04:51:43.491254   10974 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:51:43.503572   10974 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.87s)
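Both createHost attempts above fail the same way: socket_vmnet_client cannot dial /var/run/socket_vmnet, so QEMU is never launched and minikube exits 80. "Connection refused" on a unix socket means the socket file exists but nothing is accepting on it, which also matches a stale socket left behind by a crashed daemon. A minimal triage sketch for the CI host, assuming socket_vmnet was installed under /opt/socket_vmnet as the logged client path suggests (the brew services line applies only to a Homebrew-managed install):

	# Does the socket file exist, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Homebrew-managed installs run the daemon as a root service:
	sudo brew services info socket_vmnet

	# Manual start per the socket_vmnet README (the gateway address is the
	# project's documented default, not a value taken from this log):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet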

TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-820000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-820000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.840892958s)

-- stdout --
	* [old-k8s-version-820000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-820000" primary control-plane node in "old-k8s-version-820000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-820000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:51:45.837082   11091 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:51:45.837248   11091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:45.837252   11091 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:45.837254   11091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:45.837393   11091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:51:45.838539   11091 out.go:298] Setting JSON to false
	I0408 04:51:45.855468   11091 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6674,"bootTime":1712570431,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:51:45.855541   11091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:51:45.861850   11091 out.go:177] * [old-k8s-version-820000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:51:45.869214   11091 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:51:45.869256   11091 notify.go:220] Checking for updates...
	I0408 04:51:45.873840   11091 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:51:45.876753   11091 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:51:45.878475   11091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:51:45.881803   11091 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:51:45.884788   11091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:51:45.888130   11091 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:51:45.888193   11091 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:51:45.888236   11091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:51:45.892770   11091 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:51:45.899741   11091 start.go:297] selected driver: qemu2
	I0408 04:51:45.899747   11091 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:51:45.899753   11091 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:51:45.902057   11091 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:51:45.904797   11091 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:51:45.907914   11091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:51:45.907964   11091 cni.go:84] Creating CNI manager for ""
	I0408 04:51:45.907973   11091 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 04:51:45.908003   11091 start.go:340] cluster config:
	{Name:old-k8s-version-820000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:51:45.912586   11091 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:51:45.919736   11091 out.go:177] * Starting "old-k8s-version-820000" primary control-plane node in "old-k8s-version-820000" cluster
	I0408 04:51:45.923827   11091 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:51:45.923844   11091 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 04:51:45.923853   11091 cache.go:56] Caching tarball of preloaded images
	I0408 04:51:45.923935   11091 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:51:45.923944   11091 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 04:51:45.924003   11091 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/old-k8s-version-820000/config.json ...
	I0408 04:51:45.924017   11091 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/old-k8s-version-820000/config.json: {Name:mkb6091409c7441f0f7fad0f1f5d0cca8225f6a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:51:45.924521   11091 start.go:360] acquireMachinesLock for old-k8s-version-820000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:45.924559   11091 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "old-k8s-version-820000"
	I0408 04:51:45.924570   11091 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:45.924609   11091 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:45.932765   11091 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:51:45.950109   11091 start.go:159] libmachine.API.Create for "old-k8s-version-820000" (driver="qemu2")
	I0408 04:51:45.950134   11091 client.go:168] LocalClient.Create starting
	I0408 04:51:45.950193   11091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:45.950234   11091 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:45.950243   11091 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:45.950281   11091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:45.950307   11091 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:45.950312   11091 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:45.950764   11091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:46.097352   11091 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:46.156910   11091 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:46.156917   11091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:46.157093   11091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2
	I0408 04:51:46.169403   11091 main.go:141] libmachine: STDOUT: 
	I0408 04:51:46.169430   11091 main.go:141] libmachine: STDERR: 
	I0408 04:51:46.169485   11091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2 +20000M
	I0408 04:51:46.185057   11091 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:46.185077   11091 main.go:141] libmachine: STDERR: 
	I0408 04:51:46.185091   11091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2
	I0408 04:51:46.185095   11091 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:46.185127   11091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:95:37:6c:be:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2
	I0408 04:51:46.187307   11091 main.go:141] libmachine: STDOUT: 
	I0408 04:51:46.187325   11091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:46.187347   11091 client.go:171] duration metric: took 237.212291ms to LocalClient.Create
	I0408 04:51:48.189393   11091 start.go:128] duration metric: took 2.264819209s to createHost
	I0408 04:51:48.189433   11091 start.go:83] releasing machines lock for "old-k8s-version-820000", held for 2.264916208s
	W0408 04:51:48.189459   11091 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:48.200974   11091 out.go:177] * Deleting "old-k8s-version-820000" in qemu2 ...
	W0408 04:51:48.214118   11091 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:48.214125   11091 start.go:728] Will try again in 5 seconds ...
	I0408 04:51:53.214630   11091 start.go:360] acquireMachinesLock for old-k8s-version-820000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:53.214975   11091 start.go:364] duration metric: took 249.625µs to acquireMachinesLock for "old-k8s-version-820000"
	I0408 04:51:53.215022   11091 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:51:53.215188   11091 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:51:53.223802   11091 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:51:53.255031   11091 start.go:159] libmachine.API.Create for "old-k8s-version-820000" (driver="qemu2")
	I0408 04:51:53.255097   11091 client.go:168] LocalClient.Create starting
	I0408 04:51:53.255230   11091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:51:53.255284   11091 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:53.255296   11091 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:53.255349   11091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:51:53.255385   11091 main.go:141] libmachine: Decoding PEM data...
	I0408 04:51:53.255396   11091 main.go:141] libmachine: Parsing certificate...
	I0408 04:51:53.255817   11091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:51:53.407881   11091 main.go:141] libmachine: Creating SSH key...
	I0408 04:51:53.579753   11091 main.go:141] libmachine: Creating Disk image...
	I0408 04:51:53.579762   11091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:51:53.579980   11091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2
	I0408 04:51:53.592996   11091 main.go:141] libmachine: STDOUT: 
	I0408 04:51:53.593019   11091 main.go:141] libmachine: STDERR: 
	I0408 04:51:53.593090   11091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2 +20000M
	I0408 04:51:53.604026   11091 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:51:53.604042   11091 main.go:141] libmachine: STDERR: 
	I0408 04:51:53.604052   11091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2
	I0408 04:51:53.604056   11091 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:51:53.604082   11091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:bd:83:a0:08:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2
	I0408 04:51:53.605890   11091 main.go:141] libmachine: STDOUT: 
	I0408 04:51:53.605906   11091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:53.605921   11091 client.go:171] duration metric: took 350.818042ms to LocalClient.Create
	I0408 04:51:55.608007   11091 start.go:128] duration metric: took 2.392842s to createHost
	I0408 04:51:55.608088   11091 start.go:83] releasing machines lock for "old-k8s-version-820000", held for 2.393132291s
	W0408 04:51:55.608334   11091 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:55.618716   11091 out.go:177] 
	W0408 04:51:55.626907   11091 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:51:55.626936   11091 out.go:239] * 
	* 
	W0408 04:51:55.628408   11091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:51:55.637680   11091 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-820000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (51.764125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)
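FirstStart repeats the pattern: create the disk, hit the refused socket, retry once after five seconds, then exit 80 with GUEST_PROVISION. If the goal were only to get the profile running while the daemon is down, the qemu2 driver can be pointed at user-mode networking instead; a hedged re-run sketch (minikube documents --network=user for the qemu2 driver, though user-mode NAT gives the guest no host-reachable IP, so connectivity-dependent tests would still fail):

	out/minikube-darwin-arm64 start -p old-k8s-version-820000 \
	  --memory=2200 --driver=qemu2 --network=user \
	  --kubernetes-version=v1.20.0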

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-820000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-820000 create -f testdata/busybox.yaml: exit status 1 (28.777416ms)

** stderr ** 
	error: context "old-k8s-version-820000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-820000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (31.341125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (31.77475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
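DeployApp is a cascade failure rather than a new one: FirstStart never created the VM, so no kubeconfig context was written, and every kubectl --context call in this serial group dies at client-config time. The gap can be confirmed directly against the kubeconfig path from the start logs:

	KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig \
	  kubectl config get-contexts old-k8s-version-820000
	# expected here: a "context ... not found" error rather than a table row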

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-820000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-820000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-820000 describe deploy/metrics-server -n kube-system: exit status 1 (27.323083ms)

** stderr ** 
	error: context "old-k8s-version-820000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-820000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (30.653125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
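Note that the addons enable command itself exits zero here (only the follow-up kubectl describe fails), because enabling an addon records the request in the profile config rather than in the cluster; the SecondStart log below echoes the recorded state as Addons:map[dashboard:true metrics-server:true] plus the custom image and registry maps. A sketch to confirm this against the saved profile, assuming the profile JSON mirrors that echoed config:

	grep -o '"metrics-server": *true' \
	  /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/old-k8s-version-820000/config.json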

TestStartStop/group/old-k8s-version/serial/SecondStart (5.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-820000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-820000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.236639166s)

-- stdout --
	* [old-k8s-version-820000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-820000" primary control-plane node in "old-k8s-version-820000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-820000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-820000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:51:57.997432   11146 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:51:57.997567   11146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:57.997571   11146 out.go:304] Setting ErrFile to fd 2...
	I0408 04:51:57.997574   11146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:51:57.997696   11146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:51:57.998824   11146 out.go:298] Setting JSON to false
	I0408 04:51:58.017001   11146 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6687,"bootTime":1712570431,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:51:58.017070   11146 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:51:58.025736   11146 out.go:177] * [old-k8s-version-820000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:51:58.033752   11146 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:51:58.029897   11146 notify.go:220] Checking for updates...
	I0408 04:51:58.044808   11146 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:51:58.056769   11146 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:51:58.067749   11146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:51:58.071724   11146 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:51:58.078742   11146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:51:58.086057   11146 config.go:182] Loaded profile config "old-k8s-version-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0408 04:51:58.093762   11146 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 04:51:58.099842   11146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:51:58.103728   11146 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:51:58.109773   11146 start.go:297] selected driver: qemu2
	I0408 04:51:58.109783   11146 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:51:58.109848   11146 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:51:58.113343   11146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:51:58.113453   11146 cni.go:84] Creating CNI manager for ""
	I0408 04:51:58.113516   11146 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 04:51:58.113553   11146 start.go:340] cluster config:
	{Name:old-k8s-version-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:51:58.118889   11146 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:51:58.126783   11146 out.go:177] * Starting "old-k8s-version-820000" primary control-plane node in "old-k8s-version-820000" cluster
	I0408 04:51:58.130726   11146 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:51:58.130749   11146 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 04:51:58.130758   11146 cache.go:56] Caching tarball of preloaded images
	I0408 04:51:58.130849   11146 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:51:58.130854   11146 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 04:51:58.130913   11146 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/old-k8s-version-820000/config.json ...
	I0408 04:51:58.131332   11146 start.go:360] acquireMachinesLock for old-k8s-version-820000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:51:58.131360   11146 start.go:364] duration metric: took 19.875µs to acquireMachinesLock for "old-k8s-version-820000"
	I0408 04:51:58.131369   11146 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:51:58.131374   11146 fix.go:54] fixHost starting: 
	I0408 04:51:58.131484   11146 fix.go:112] recreateIfNeeded on old-k8s-version-820000: state=Stopped err=<nil>
	W0408 04:51:58.131493   11146 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:51:58.134798   11146 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-820000" ...
	I0408 04:51:58.142826   11146 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:bd:83:a0:08:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2
	I0408 04:51:58.144955   11146 main.go:141] libmachine: STDOUT: 
	I0408 04:51:58.144976   11146 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:51:58.145008   11146 fix.go:56] duration metric: took 13.632542ms for fixHost
	I0408 04:51:58.145012   11146 start.go:83] releasing machines lock for "old-k8s-version-820000", held for 13.647458ms
	W0408 04:51:58.145019   11146 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:51:58.145058   11146 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:51:58.145067   11146 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:03.145647   11146 start.go:360] acquireMachinesLock for old-k8s-version-820000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:03.146057   11146 start.go:364] duration metric: took 316.375µs to acquireMachinesLock for "old-k8s-version-820000"
	I0408 04:52:03.146204   11146 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:03.146222   11146 fix.go:54] fixHost starting: 
	I0408 04:52:03.146833   11146 fix.go:112] recreateIfNeeded on old-k8s-version-820000: state=Stopped err=<nil>
	W0408 04:52:03.146855   11146 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:03.155488   11146 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-820000" ...
	I0408 04:52:03.159761   11146 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:bd:83:a0:08:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/old-k8s-version-820000/disk.qcow2
	I0408 04:52:03.167982   11146 main.go:141] libmachine: STDOUT: 
	I0408 04:52:03.168040   11146 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:03.168121   11146 fix.go:56] duration metric: took 21.900666ms for fixHost
	I0408 04:52:03.168137   11146 start.go:83] releasing machines lock for "old-k8s-version-820000", held for 22.062375ms
	W0408 04:52:03.168304   11146 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-820000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-820000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:03.176530   11146 out.go:177] 
	W0408 04:52:03.180559   11146 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:03.180583   11146 out.go:239] * 
	* 
	W0408 04:52:03.182759   11146 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:03.190436   11146 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-820000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (63.76975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.30s)
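
Note on the root cause: every StartHost failure above comes from socket_vmnet_client being unable to reach the unix socket at /var/run/socket_vmnet, so it can never hand QEMU the network file descriptor it expects (-netdev socket,id=net0,fd=3). A minimal diagnostic sketch for the CI host, assuming the /opt/socket_vmnet install layout shown in the log; the --vmnet-gateway value is illustrative, not taken from this report:

	# Is the daemon alive and its unix socket present?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, launch it as root (invocation per the socket_vmnet README; adjust the gateway)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet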

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-820000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (33.430541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-820000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.774667ms)

** stderr ** 
	error: context "old-k8s-version-820000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (32.782208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-820000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (31.843ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
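
The image diff above uses go-cmp's -want/+got notation: every line prefixed with "-" is an image the test expected to find but did not, and the absence of "+" lines means the listing returned nothing at all for the stopped VM. The empty result can be reproduced with the same command the test runs:

	# With the host Stopped this prints an empty set, so all eight
	# v1.20.0 images are reported as missing on the -want side.
	out/minikube-darwin-arm64 -p old-k8s-version-820000 image list --format=json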

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-820000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-820000 --alsologtostderr -v=1: exit status 83 (45.929125ms)

-- stdout --
	* The control-plane node old-k8s-version-820000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-820000"

-- /stdout --
** stderr ** 
	I0408 04:52:03.470263   11167 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:03.471154   11167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:03.471158   11167 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:03.471160   11167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:03.471296   11167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:03.471498   11167 out.go:298] Setting JSON to false
	I0408 04:52:03.471506   11167 mustload.go:65] Loading cluster: old-k8s-version-820000
	I0408 04:52:03.471697   11167 config.go:182] Loaded profile config "old-k8s-version-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0408 04:52:03.476577   11167 out.go:177] * The control-plane node old-k8s-version-820000 host is not running: state=Stopped
	I0408 04:52:03.480535   11167 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-820000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-820000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (31.647625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (31.210792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
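
Here `pause` bails out cleanly with exit status 83 rather than a provisioning error: mustload detects the Stopped control-plane host and the tool prints its own remediation (the 8x range appears to be minikube's guest-state exit-code family, as with the status 80 GUEST_PROVISION exits above; that grouping is an inference from this log, not a documented guarantee). The recovery path, using only commands shown in the report:

	out/minikube-darwin-arm64 status -p old-k8s-version-820000   # confirms state=Stopped
	out/minikube-darwin-arm64 start -p old-k8s-version-820000    # the hint printed in stdout above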

TestStartStop/group/no-preload/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.0: exit status 80 (9.856212125s)

-- stdout --
	* [no-preload-272000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-272000" primary control-plane node in "no-preload-272000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-272000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:52:03.953486   11190 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:03.953626   11190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:03.953629   11190 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:03.953632   11190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:03.953774   11190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:03.954784   11190 out.go:298] Setting JSON to false
	I0408 04:52:03.970867   11190 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6692,"bootTime":1712570431,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:52:03.970926   11190 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:52:03.975119   11190 out.go:177] * [no-preload-272000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:52:03.982019   11190 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:52:03.982073   11190 notify.go:220] Checking for updates...
	I0408 04:52:03.988996   11190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:52:03.992001   11190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:52:03.995003   11190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:52:03.998032   11190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:52:04.000985   11190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:52:04.004333   11190 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:04.004395   11190 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 04:52:04.004450   11190 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:52:04.008967   11190 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:52:04.015944   11190 start.go:297] selected driver: qemu2
	I0408 04:52:04.015950   11190 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:52:04.015956   11190 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:52:04.018306   11190 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:52:04.021048   11190 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:52:04.024149   11190 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:52:04.024180   11190 cni.go:84] Creating CNI manager for ""
	I0408 04:52:04.024189   11190 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:52:04.024194   11190 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:52:04.024227   11190 start.go:340] cluster config:
	{Name:no-preload-272000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-272000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:04.028982   11190 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.035972   11190 out.go:177] * Starting "no-preload-272000" primary control-plane node in "no-preload-272000" cluster
	I0408 04:52:04.040003   11190 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime docker
	I0408 04:52:04.040070   11190 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/no-preload-272000/config.json ...
	I0408 04:52:04.040091   11190 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/no-preload-272000/config.json: {Name:mka2edaecbcd8d66f263e8f9b794d8f006dccade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:52:04.040083   11190 cache.go:107] acquiring lock: {Name:mk5f2f2ca0de4b8bf5c9307e03a1b5b0cb505523 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.040089   11190 cache.go:107] acquiring lock: {Name:mk9877ffaea1c1634c0e03efe73e1284d9ba32bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.040104   11190 cache.go:107] acquiring lock: {Name:mk0aee2f1a22898f55fc982413b2a783b2bd87c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.040142   11190 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0408 04:52:04.040146   11190 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 58.125µs
	I0408 04:52:04.040151   11190 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0408 04:52:04.040157   11190 cache.go:107] acquiring lock: {Name:mkc9e05e970544330f783b0aedf828edfa735e22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.040264   11190 cache.go:107] acquiring lock: {Name:mk67bb028e9dd4b9f5ee36e8e77536422614995d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.040269   11190 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 04:52:04.040302   11190 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 04:52:04.040315   11190 cache.go:107] acquiring lock: {Name:mkb14e720760971e2684cb2fc3878dd2588a068e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.040308   11190 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 04:52:04.040358   11190 cache.go:107] acquiring lock: {Name:mkab01c3059f4b440c9db81765c0898da649931e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.040401   11190 cache.go:107] acquiring lock: {Name:mk50d25e89241d05349defeddf842da380e65bbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:04.040483   11190 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0408 04:52:04.040521   11190 start.go:360] acquireMachinesLock for no-preload-272000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:04.040531   11190 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 04:52:04.040539   11190 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0408 04:52:04.040553   11190 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "no-preload-272000"
	I0408 04:52:04.040563   11190 start.go:93] Provisioning new machine with config: &{Name:no-preload-272000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-272000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:52:04.040599   11190 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:52:04.049040   11190 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:52:04.040680   11190 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 04:52:04.052614   11190 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 04:52:04.053292   11190 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0408 04:52:04.053352   11190 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 04:52:04.055737   11190 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 04:52:04.055848   11190 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0408 04:52:04.055991   11190 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 04:52:04.056057   11190 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 04:52:04.065198   11190 start.go:159] libmachine.API.Create for "no-preload-272000" (driver="qemu2")
	I0408 04:52:04.065221   11190 client.go:168] LocalClient.Create starting
	I0408 04:52:04.065280   11190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:52:04.065307   11190 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:04.065323   11190 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:04.065360   11190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:52:04.065382   11190 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:04.065389   11190 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:04.065723   11190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:52:04.216652   11190 main.go:141] libmachine: Creating SSH key...
	I0408 04:52:04.286096   11190 main.go:141] libmachine: Creating Disk image...
	I0408 04:52:04.286116   11190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:52:04.286314   11190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2
	I0408 04:52:04.299678   11190 main.go:141] libmachine: STDOUT: 
	I0408 04:52:04.299727   11190 main.go:141] libmachine: STDERR: 
	I0408 04:52:04.299830   11190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2 +20000M
	I0408 04:52:04.311190   11190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:52:04.311217   11190 main.go:141] libmachine: STDERR: 
	I0408 04:52:04.311228   11190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2
	I0408 04:52:04.311233   11190 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:52:04.311279   11190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:6d:2a:14:9f:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2
	I0408 04:52:04.313475   11190 main.go:141] libmachine: STDOUT: 
	I0408 04:52:04.313493   11190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:04.313514   11190 client.go:171] duration metric: took 248.291208ms to LocalClient.Create
	I0408 04:52:04.436425   11190 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0408 04:52:04.449478   11190 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0408 04:52:04.464365   11190 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0408 04:52:04.475903   11190 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0408 04:52:04.490418   11190 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0408 04:52:04.504988   11190 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0408 04:52:04.538993   11190 cache.go:162] opening:  /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0408 04:52:04.578879   11190 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0408 04:52:04.578892   11190 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 538.586125ms
	I0408 04:52:04.578902   11190 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0408 04:52:06.313786   11190 start.go:128] duration metric: took 2.273200417s to createHost
	I0408 04:52:06.313862   11190 start.go:83] releasing machines lock for "no-preload-272000", held for 2.273339125s
	W0408 04:52:06.313922   11190 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:06.323710   11190 out.go:177] * Deleting "no-preload-272000" in qemu2 ...
	W0408 04:52:06.345298   11190 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:06.345339   11190 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:06.655851   11190 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 exists
	I0408 04:52:06.655878   11190 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-rc.0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.0" took 2.615767125s
	I0408 04:52:06.655894   11190 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-rc.0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 succeeded
	I0408 04:52:06.775388   11190 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0408 04:52:06.775409   11190 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.735117667s
	I0408 04:52:06.775423   11190 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0408 04:52:08.165799   11190 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 exists
	I0408 04:52:08.165824   11190 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0" took 4.125820041s
	I0408 04:52:08.165840   11190 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-rc.0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 succeeded
	I0408 04:52:08.267124   11190 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 exists
	I0408 04:52:08.267158   11190 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0" took 4.227139458s
	I0408 04:52:08.267173   11190 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-rc.0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 succeeded
	I0408 04:52:08.633998   11190 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 exists
	I0408 04:52:08.634019   11190 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0" took 4.593797167s
	I0408 04:52:08.634041   11190 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 succeeded
	I0408 04:52:11.345516   11190 start.go:360] acquireMachinesLock for no-preload-272000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:11.345989   11190 start.go:364] duration metric: took 393.875µs to acquireMachinesLock for "no-preload-272000"
	I0408 04:52:11.346131   11190 start.go:93] Provisioning new machine with config: &{Name:no-preload-272000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-272000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:52:11.346386   11190 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:52:11.356857   11190 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:52:11.382648   11190 cache.go:157] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0408 04:52:11.382717   11190 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.3425985s
	I0408 04:52:11.382738   11190 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0408 04:52:11.382780   11190 cache.go:87] Successfully saved all images to host disk.
	I0408 04:52:11.397758   11190 start.go:159] libmachine.API.Create for "no-preload-272000" (driver="qemu2")
	I0408 04:52:11.397808   11190 client.go:168] LocalClient.Create starting
	I0408 04:52:11.397974   11190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:52:11.398044   11190 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:11.398069   11190 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:11.398130   11190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:52:11.398171   11190 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:11.398185   11190 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:11.398620   11190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:52:11.549746   11190 main.go:141] libmachine: Creating SSH key...
	I0408 04:52:11.703769   11190 main.go:141] libmachine: Creating Disk image...
	I0408 04:52:11.703779   11190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:52:11.703995   11190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2
	I0408 04:52:11.717145   11190 main.go:141] libmachine: STDOUT: 
	I0408 04:52:11.717171   11190 main.go:141] libmachine: STDERR: 
	I0408 04:52:11.717230   11190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2 +20000M
	I0408 04:52:11.728403   11190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:52:11.728426   11190 main.go:141] libmachine: STDERR: 
	I0408 04:52:11.728450   11190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2
	I0408 04:52:11.728459   11190 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:52:11.728491   11190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c9:6a:7b:08:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2
	I0408 04:52:11.730375   11190 main.go:141] libmachine: STDOUT: 
	I0408 04:52:11.730391   11190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:11.730403   11190 client.go:171] duration metric: took 332.586167ms to LocalClient.Create
	I0408 04:52:13.732580   11190 start.go:128] duration metric: took 2.38620425s to createHost
	I0408 04:52:13.732692   11190 start.go:83] releasing machines lock for "no-preload-272000", held for 2.386719417s
	W0408 04:52:13.733123   11190 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-272000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-272000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:13.742790   11190 out.go:177] 
	W0408 04:52:13.750846   11190 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:13.750876   11190 out.go:239] * 
	* 
	W0408 04:52:13.753612   11190 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:13.766708   11190 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (60.816958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)
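
One useful side effect survives this failure: because the test passes --preload=false, minikube skipped the preloaded tarball and cached each control-plane image individually (the cache.go lines above, e.g. etcd_3.5.12-0 after 7.34s). Those per-image tarballs persist on the host and should be reused on a retry; a quick check, with the path taken verbatim from the log:

	ls /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/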

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-272000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-272000 create -f testdata/busybox.yaml: exit status 1 (30.500458ms)

** stderr ** 
	error: context "no-preload-272000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-272000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (35.00025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (34.644792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
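
The kubectl failure in this subtest is secondary damage: FirstStart never created the cluster, so no kubeconfig context named no-preload-272000 was ever written. This is easy to confirm directly:

	kubectl config get-contexts                      # no-preload-272000 is absent
	kubectl --context no-preload-272000 get pods     # reproduces: context "no-preload-272000" does not exist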

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-272000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-272000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-272000 describe deploy/metrics-server -n kube-system: exit status 1 (26.953875ms)

** stderr ** 
	error: context "no-preload-272000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-272000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (30.756083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (10.1s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-967000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-967000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (10.024816583s)

-- stdout --
	* [embed-certs-967000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-967000" primary control-plane node in "embed-certs-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:52:13.991067   11248 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:13.991200   11248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:13.991203   11248 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:13.991205   11248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:13.991333   11248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:13.992358   11248 out.go:298] Setting JSON to false
	I0408 04:52:14.009939   11248 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6702,"bootTime":1712570431,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:52:14.010013   11248 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:52:14.014818   11248 out.go:177] * [embed-certs-967000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:52:14.021884   11248 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:52:14.024792   11248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:52:14.021944   11248 notify.go:220] Checking for updates...
	I0408 04:52:14.031819   11248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:52:14.034751   11248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:52:14.037799   11248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:52:14.040835   11248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:52:14.044067   11248 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:14.044137   11248 config.go:182] Loaded profile config "no-preload-272000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.0
	I0408 04:52:14.044190   11248 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:52:14.048780   11248 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:52:14.055781   11248 start.go:297] selected driver: qemu2
	I0408 04:52:14.055789   11248 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:52:14.055795   11248 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:52:14.058170   11248 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:52:14.061857   11248 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:52:14.064927   11248 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:52:14.064973   11248 cni.go:84] Creating CNI manager for ""
	I0408 04:52:14.064982   11248 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:52:14.064986   11248 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:52:14.065028   11248 start.go:340] cluster config:
	{Name:embed-certs-967000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:14.069871   11248 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:14.076800   11248 out.go:177] * Starting "embed-certs-967000" primary control-plane node in "embed-certs-967000" cluster
	I0408 04:52:14.079750   11248 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:52:14.079794   11248 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:52:14.079802   11248 cache.go:56] Caching tarball of preloaded images
	I0408 04:52:14.079922   11248 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:52:14.079929   11248 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:52:14.079994   11248 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/embed-certs-967000/config.json ...
	I0408 04:52:14.080007   11248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/embed-certs-967000/config.json: {Name:mk0664dced10cd5a740bf0f7ad0c0fbaf6655e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:52:14.080440   11248 start.go:360] acquireMachinesLock for embed-certs-967000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:14.080468   11248 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "embed-certs-967000"
	I0408 04:52:14.080478   11248 start.go:93] Provisioning new machine with config: &{Name:embed-certs-967000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:52:14.080511   11248 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:52:14.084863   11248 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:52:14.099855   11248 start.go:159] libmachine.API.Create for "embed-certs-967000" (driver="qemu2")
	I0408 04:52:14.099878   11248 client.go:168] LocalClient.Create starting
	I0408 04:52:14.099936   11248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:52:14.099965   11248 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:14.099974   11248 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:14.100006   11248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:52:14.100045   11248 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:14.100054   11248 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:14.101307   11248 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:52:14.247008   11248 main.go:141] libmachine: Creating SSH key...
	I0408 04:52:14.298542   11248 main.go:141] libmachine: Creating Disk image...
	I0408 04:52:14.298548   11248 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:52:14.298729   11248 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2
	I0408 04:52:14.311158   11248 main.go:141] libmachine: STDOUT: 
	I0408 04:52:14.311180   11248 main.go:141] libmachine: STDERR: 
	I0408 04:52:14.311233   11248 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2 +20000M
	I0408 04:52:14.321963   11248 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:52:14.321980   11248 main.go:141] libmachine: STDERR: 
	I0408 04:52:14.322000   11248 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2
	I0408 04:52:14.322006   11248 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:52:14.322032   11248 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:9e:e9:84:a1:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2
	I0408 04:52:14.323848   11248 main.go:141] libmachine: STDOUT: 
	I0408 04:52:14.323862   11248 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:14.323881   11248 client.go:171] duration metric: took 224.00025ms to LocalClient.Create
	I0408 04:52:16.326034   11248 start.go:128] duration metric: took 2.245539125s to createHost
	I0408 04:52:16.326162   11248 start.go:83] releasing machines lock for "embed-certs-967000", held for 2.245681791s
	W0408 04:52:16.326223   11248 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:16.336350   11248 out.go:177] * Deleting "embed-certs-967000" in qemu2 ...
	W0408 04:52:16.369224   11248 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:16.369255   11248 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:21.371448   11248 start.go:360] acquireMachinesLock for embed-certs-967000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:21.371938   11248 start.go:364] duration metric: took 361.917µs to acquireMachinesLock for "embed-certs-967000"
	I0408 04:52:21.372069   11248 start.go:93] Provisioning new machine with config: &{Name:embed-certs-967000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:52:21.372377   11248 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:52:21.382010   11248 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:52:21.433482   11248 start.go:159] libmachine.API.Create for "embed-certs-967000" (driver="qemu2")
	I0408 04:52:21.433544   11248 client.go:168] LocalClient.Create starting
	I0408 04:52:21.433655   11248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:52:21.433714   11248 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:21.433730   11248 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:21.433789   11248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:52:21.433831   11248 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:21.433873   11248 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:21.434417   11248 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:52:21.591475   11248 main.go:141] libmachine: Creating SSH key...
	I0408 04:52:21.895032   11248 main.go:141] libmachine: Creating Disk image...
	I0408 04:52:21.895041   11248 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:52:21.895308   11248 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2
	I0408 04:52:21.908588   11248 main.go:141] libmachine: STDOUT: 
	I0408 04:52:21.908610   11248 main.go:141] libmachine: STDERR: 
	I0408 04:52:21.908676   11248 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2 +20000M
	I0408 04:52:21.919649   11248 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:52:21.919666   11248 main.go:141] libmachine: STDERR: 
	I0408 04:52:21.919674   11248 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2
	I0408 04:52:21.919677   11248 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:52:21.919715   11248 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b0:c1:09:c6:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2
	I0408 04:52:21.921441   11248 main.go:141] libmachine: STDOUT: 
	I0408 04:52:21.921457   11248 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:21.921469   11248 client.go:171] duration metric: took 487.928417ms to LocalClient.Create
	I0408 04:52:23.923654   11248 start.go:128] duration metric: took 2.551246125s to createHost
	I0408 04:52:23.923778   11248 start.go:83] releasing machines lock for "embed-certs-967000", held for 2.551812042s
	W0408 04:52:23.924053   11248 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:23.933711   11248 out.go:177] 
	W0408 04:52:23.950784   11248 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:23.950818   11248 out.go:239] * 
	* 
	W0408 04:52:23.953254   11248 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:23.960631   11248 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-967000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (70.423209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.10s)
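
Every failed start in this group dies at the same host-side step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet and the connect is refused, so QEMU is never launched. Below is a minimal probe of just that connection step, written as a hypothetical standalone Go program (it is not part of the test suite; the socket path is the SocketVMnetPath value from the config dump above):

	// probe_socket_vmnet.go: a sketch that attempts the same unix-socket
	// connection the qemu2 driver makes before launching QEMU.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the config above
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket file missing:", err) // daemon never created it
			os.Exit(1)
		}
		conn, err := net.Dial("unix", sock)
		if err != nil {
			// "Connection refused" here matches the ERROR lines in this run:
			// the socket file exists but no socket_vmnet daemon is accepting.
			fmt.Println("daemon not accepting:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

On a Homebrew install, restarting the daemon (for example with "sudo brew services restart socket_vmnet") is one plausible fix; that is host maintenance outside the scope of the test itself.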

TestStartStop/group/no-preload/serial/SecondStart (6.65s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.0: exit status 80 (6.598322083s)

-- stdout --
	* [no-preload-272000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-272000" primary control-plane node in "no-preload-272000" cluster
	* Restarting existing qemu2 VM for "no-preload-272000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-272000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:52:17.439666   11284 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:17.439805   11284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:17.439808   11284 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:17.439811   11284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:17.439922   11284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:17.440943   11284 out.go:298] Setting JSON to false
	I0408 04:52:17.457062   11284 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6706,"bootTime":1712570431,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:52:17.457142   11284 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:52:17.462071   11284 out.go:177] * [no-preload-272000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:52:17.472038   11284 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:52:17.476022   11284 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:52:17.472087   11284 notify.go:220] Checking for updates...
	I0408 04:52:17.481036   11284 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:52:17.484043   11284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:52:17.487057   11284 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:52:17.490018   11284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:52:17.493355   11284 config.go:182] Loaded profile config "no-preload-272000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.0
	I0408 04:52:17.493615   11284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:52:17.498068   11284 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:52:17.505025   11284 start.go:297] selected driver: qemu2
	I0408 04:52:17.505032   11284 start.go:901] validating driver "qemu2" against &{Name:no-preload-272000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-272000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:17.505102   11284 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:52:17.507428   11284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:52:17.507480   11284 cni.go:84] Creating CNI manager for ""
	I0408 04:52:17.507487   11284 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:52:17.507512   11284 start.go:340] cluster config:
	{Name:no-preload-272000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-272000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:17.511819   11284 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.518977   11284 out.go:177] * Starting "no-preload-272000" primary control-plane node in "no-preload-272000" cluster
	I0408 04:52:17.523043   11284 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime docker
	I0408 04:52:17.523107   11284 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/no-preload-272000/config.json ...
	I0408 04:52:17.523146   11284 cache.go:107] acquiring lock: {Name:mk5f2f2ca0de4b8bf5c9307e03a1b5b0cb505523 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.523154   11284 cache.go:107] acquiring lock: {Name:mk9877ffaea1c1634c0e03efe73e1284d9ba32bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.523170   11284 cache.go:107] acquiring lock: {Name:mk50d25e89241d05349defeddf842da380e65bbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.523198   11284 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 exists
	I0408 04:52:17.523205   11284 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0" took 61.584µs
	I0408 04:52:17.523213   11284 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-rc.0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 succeeded
	I0408 04:52:17.523214   11284 cache.go:107] acquiring lock: {Name:mk67bb028e9dd4b9f5ee36e8e77536422614995d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.523220   11284 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0408 04:52:17.523228   11284 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 76.375µs
	I0408 04:52:17.523245   11284 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 exists
	I0408 04:52:17.523251   11284 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0" took 96µs
	I0408 04:52:17.523254   11284 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0408 04:52:17.523247   11284 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0408 04:52:17.523254   11284 cache.go:107] acquiring lock: {Name:mkc9e05e970544330f783b0aedf828edfa735e22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.523258   11284 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 45.667µs
	I0408 04:52:17.523265   11284 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0408 04:52:17.523262   11284 cache.go:107] acquiring lock: {Name:mkab01c3059f4b440c9db81765c0898da649931e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.523298   11284 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 exists
	I0408 04:52:17.523304   11284 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-rc.0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.0" took 51.542µs
	I0408 04:52:17.523306   11284 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0408 04:52:17.523308   11284 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-rc.0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 succeeded
	I0408 04:52:17.523255   11284 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 succeeded
	I0408 04:52:17.523310   11284 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 49.375µs
	I0408 04:52:17.523314   11284 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0408 04:52:17.523355   11284 cache.go:107] acquiring lock: {Name:mk0aee2f1a22898f55fc982413b2a783b2bd87c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.523357   11284 cache.go:107] acquiring lock: {Name:mkb14e720760971e2684cb2fc3878dd2588a068e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:17.523418   11284 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0408 04:52:17.523421   11284 cache.go:115] /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 exists
	I0408 04:52:17.523429   11284 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0" took 139.166µs
	I0408 04:52:17.523437   11284 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-rc.0 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 succeeded
	I0408 04:52:17.523423   11284 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 131.583µs
	I0408 04:52:17.523444   11284 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0408 04:52:17.523452   11284 cache.go:87] Successfully saved all images to host disk.
	I0408 04:52:17.523608   11284 start.go:360] acquireMachinesLock for no-preload-272000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:17.523646   11284 start.go:364] duration metric: took 31.916µs to acquireMachinesLock for "no-preload-272000"
	I0408 04:52:17.523655   11284 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:17.523660   11284 fix.go:54] fixHost starting: 
	I0408 04:52:17.523776   11284 fix.go:112] recreateIfNeeded on no-preload-272000: state=Stopped err=<nil>
	W0408 04:52:17.523788   11284 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:17.532051   11284 out.go:177] * Restarting existing qemu2 VM for "no-preload-272000" ...
	I0408 04:52:17.536060   11284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c9:6a:7b:08:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2
	I0408 04:52:17.538198   11284 main.go:141] libmachine: STDOUT: 
	I0408 04:52:17.538230   11284 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:17.538258   11284 fix.go:56] duration metric: took 14.597583ms for fixHost
	I0408 04:52:17.538262   11284 start.go:83] releasing machines lock for "no-preload-272000", held for 14.611834ms
	W0408 04:52:17.538269   11284 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:17.538297   11284 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:17.538302   11284 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:22.540410   11284 start.go:360] acquireMachinesLock for no-preload-272000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:23.923954   11284 start.go:364] duration metric: took 1.383442834s to acquireMachinesLock for "no-preload-272000"
	I0408 04:52:23.924133   11284 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:23.924150   11284 fix.go:54] fixHost starting: 
	I0408 04:52:23.924850   11284 fix.go:112] recreateIfNeeded on no-preload-272000: state=Stopped err=<nil>
	W0408 04:52:23.924877   11284 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:23.946687   11284 out.go:177] * Restarting existing qemu2 VM for "no-preload-272000" ...
	I0408 04:52:23.953895   11284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c9:6a:7b:08:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/no-preload-272000/disk.qcow2
	I0408 04:52:23.963442   11284 main.go:141] libmachine: STDOUT: 
	I0408 04:52:23.963510   11284 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:23.963604   11284 fix.go:56] duration metric: took 39.452542ms for fixHost
	I0408 04:52:23.963658   11284 start.go:83] releasing machines lock for "no-preload-272000", held for 39.618042ms
	W0408 04:52:23.963850   11284 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-272000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-272000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:23.973119   11284 out.go:177] 
	W0408 04:52:23.983906   11284 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:23.984041   11284 out.go:239] * 
	* 
	W0408 04:52:23.986913   11284 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:23.999633   11284 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (53.133916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.65s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-967000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-967000 create -f testdata/busybox.yaml: exit status 1 (31.029917ms)

** stderr ** 
	error: context "embed-certs-967000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-967000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (33.135791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (35.939084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
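
This failure and the ones that follow for this profile are fallout from FirstStart: the half-created profile was deleted, so the kubeconfig used by the harness no longer contains an "embed-certs-967000" context and every kubectl --context call fails before reaching any cluster. A hypothetical sketch that lists the contexts actually present (the path is the KUBECONFIG value printed in each run; client-go's clientcmd loader is an assumption here, the harness itself is not shown doing this):

	// list_contexts.go: enumerate kubeconfig contexts to confirm why
	// kubectl --context embed-certs-967000 reports "does not exist".
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18588-7343/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		for name := range cfg.Contexts {
			fmt.Println(name) // embed-certs-967000 is absent: its profile was deleted
		}
	}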

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-272000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (35.558958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
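
The post-mortem helper runs a status probe and, on exit code 7, records "may be ok" and skips log retrieval rather than failing the post-mortem. A sketch of that probe-and-branch shape (command path and profile name are taken from the log; the meaning of exit code 7 is inferred from the "(may be ok)" notes in this report, not from minikube documentation):

	// status_probe.go: run the same status check the post-mortem uses and
	// branch on the exit code the way helpers_test.go appears to.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "no-preload-272000")
		out, err := cmd.Output()            // stdout is captured even on a non-zero exit
		fmt.Printf("host state: %s\n", out) // "Stopped" in the runs above
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			fmt.Println("host not running (may be ok); skipping log retrieval")
		}
	}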

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-272000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-272000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-272000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.089958ms)

** stderr ** 
	error: context "no-preload-272000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-272000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (33.393375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-967000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-967000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-967000 describe deploy/metrics-server -n kube-system: exit status 1 (29.379292ms)

** stderr ** 
	error: context "embed-certs-967000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-967000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (34.307625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-272000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (33.987166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)
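
The (-want +got) block above appears to be go-cmp's diff convention: each "-"-prefixed entry was expected but missing from the "image list" output, i.e. the got set was empty, which is consistent with a VM that never booted. Reproducing the empty listing against this run's stopped profile is a one-liner:

	out/minikube-darwin-arm64 -p no-preload-272000 image list --format=json   # expect no images while the host is Stopped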

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-272000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-272000 --alsologtostderr -v=1: exit status 83 (45.99875ms)

-- stdout --
	* The control-plane node no-preload-272000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-272000"

-- /stdout --
** stderr ** 
	I0408 04:52:24.282813   11318 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:24.282968   11318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:24.282972   11318 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:24.282974   11318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:24.283107   11318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:24.283325   11318 out.go:298] Setting JSON to false
	I0408 04:52:24.283335   11318 mustload.go:65] Loading cluster: no-preload-272000
	I0408 04:52:24.283530   11318 config.go:182] Loaded profile config "no-preload-272000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.0
	I0408 04:52:24.286714   11318 out.go:177] * The control-plane node no-preload-272000 host is not running: state=Stopped
	I0408 04:52:24.290674   11318 out.go:177]   To start a cluster, run: "minikube start -p no-preload-272000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-272000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (30.604958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (30.698333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
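
The pause failure is purely a consequence of host state, not of pausing itself: "status" exits 7 for a stopped host, and "pause" refuses with exit status 83 plus a start hint instead of attempting anything. Both steps can be replayed verbatim from this run:

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000   # prints "Stopped", exit status 7
	out/minikube-darwin-arm64 pause -p no-preload-272000                       # exit status 83 with the "host is not running" hint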

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.994325667s)

-- stdout --
	* [default-k8s-diff-port-730000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-730000" primary control-plane node in "default-k8s-diff-port-730000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:52:24.984806   11362 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:24.984955   11362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:24.984958   11362 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:24.984961   11362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:24.985077   11362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:24.986188   11362 out.go:298] Setting JSON to false
	I0408 04:52:25.002667   11362 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6713,"bootTime":1712570431,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:52:25.002726   11362 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:52:25.007786   11362 out.go:177] * [default-k8s-diff-port-730000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:52:25.015758   11362 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:52:25.018734   11362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:52:25.015822   11362 notify.go:220] Checking for updates...
	I0408 04:52:25.022672   11362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:52:25.025693   11362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:52:25.028681   11362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:52:25.031756   11362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:52:25.035060   11362 config.go:182] Loaded profile config "embed-certs-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:25.035120   11362 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:25.035176   11362 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:52:25.039673   11362 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:52:25.046668   11362 start.go:297] selected driver: qemu2
	I0408 04:52:25.046675   11362 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:52:25.046681   11362 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:52:25.049029   11362 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:52:25.052476   11362 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:52:25.055689   11362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:52:25.055721   11362 cni.go:84] Creating CNI manager for ""
	I0408 04:52:25.055728   11362 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:52:25.055732   11362 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:52:25.055762   11362 start.go:340] cluster config:
	{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:25.060245   11362 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:25.067686   11362 out.go:177] * Starting "default-k8s-diff-port-730000" primary control-plane node in "default-k8s-diff-port-730000" cluster
	I0408 04:52:25.071704   11362 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:52:25.071729   11362 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:52:25.071739   11362 cache.go:56] Caching tarball of preloaded images
	I0408 04:52:25.071799   11362 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:52:25.071805   11362 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:52:25.071876   11362 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/default-k8s-diff-port-730000/config.json ...
	I0408 04:52:25.071889   11362 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/default-k8s-diff-port-730000/config.json: {Name:mk3a360c5d6ac5cdb4a0ed1ba8d981c4ad9a2c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:52:25.072119   11362 start.go:360] acquireMachinesLock for default-k8s-diff-port-730000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:25.072153   11362 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "default-k8s-diff-port-730000"
	I0408 04:52:25.072164   11362 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:52:25.072205   11362 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:52:25.081627   11362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:52:25.098880   11362 start.go:159] libmachine.API.Create for "default-k8s-diff-port-730000" (driver="qemu2")
	I0408 04:52:25.098910   11362 client.go:168] LocalClient.Create starting
	I0408 04:52:25.098972   11362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:52:25.099003   11362 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:25.099016   11362 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:25.099055   11362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:52:25.099077   11362 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:25.099086   11362 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:25.099567   11362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:52:25.247542   11362 main.go:141] libmachine: Creating SSH key...
	I0408 04:52:25.357287   11362 main.go:141] libmachine: Creating Disk image...
	I0408 04:52:25.357312   11362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:52:25.357475   11362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0408 04:52:25.369938   11362 main.go:141] libmachine: STDOUT: 
	I0408 04:52:25.369959   11362 main.go:141] libmachine: STDERR: 
	I0408 04:52:25.370008   11362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2 +20000M
	I0408 04:52:25.380607   11362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:52:25.380634   11362 main.go:141] libmachine: STDERR: 
	I0408 04:52:25.380650   11362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0408 04:52:25.380654   11362 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:52:25.380700   11362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:6a:d4:c1:b7:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0408 04:52:25.382367   11362 main.go:141] libmachine: STDOUT: 
	I0408 04:52:25.382383   11362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:25.382404   11362 client.go:171] duration metric: took 283.492875ms to LocalClient.Create
	I0408 04:52:27.384562   11362 start.go:128] duration metric: took 2.312374666s to createHost
	I0408 04:52:27.384624   11362 start.go:83] releasing machines lock for "default-k8s-diff-port-730000", held for 2.312501792s
	W0408 04:52:27.384703   11362 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:27.401225   11362 out.go:177] * Deleting "default-k8s-diff-port-730000" in qemu2 ...
	W0408 04:52:27.428770   11362 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:27.428797   11362 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:32.430902   11362 start.go:360] acquireMachinesLock for default-k8s-diff-port-730000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:32.431278   11362 start.go:364] duration metric: took 305.958µs to acquireMachinesLock for "default-k8s-diff-port-730000"
	I0408 04:52:32.431385   11362 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:52:32.431648   11362 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:52:32.441287   11362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:52:32.490040   11362 start.go:159] libmachine.API.Create for "default-k8s-diff-port-730000" (driver="qemu2")
	I0408 04:52:32.490101   11362 client.go:168] LocalClient.Create starting
	I0408 04:52:32.490227   11362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:52:32.490292   11362 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:32.490309   11362 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:32.490382   11362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:52:32.490426   11362 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:32.490438   11362 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:32.490974   11362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:52:32.646005   11362 main.go:141] libmachine: Creating SSH key...
	I0408 04:52:32.860719   11362 main.go:141] libmachine: Creating Disk image...
	I0408 04:52:32.860732   11362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:52:32.860913   11362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0408 04:52:32.873531   11362 main.go:141] libmachine: STDOUT: 
	I0408 04:52:32.873552   11362 main.go:141] libmachine: STDERR: 
	I0408 04:52:32.873600   11362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2 +20000M
	I0408 04:52:32.884212   11362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:52:32.884237   11362 main.go:141] libmachine: STDERR: 
	I0408 04:52:32.884253   11362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0408 04:52:32.884264   11362 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:52:32.884302   11362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:dd:28:51:c5:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0408 04:52:32.885996   11362 main.go:141] libmachine: STDOUT: 
	I0408 04:52:32.886012   11362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:32.886027   11362 client.go:171] duration metric: took 395.927875ms to LocalClient.Create
	I0408 04:52:34.888175   11362 start.go:128] duration metric: took 2.45652275s to createHost
	I0408 04:52:34.888235   11362 start.go:83] releasing machines lock for "default-k8s-diff-port-730000", held for 2.456979s
	W0408 04:52:34.888658   11362 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:34.907258   11362 out.go:177] 
	W0408 04:52:34.914290   11362 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:34.914334   11362 out.go:239] * 
	* 
	W0408 04:52:34.917212   11362 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:34.930193   11362 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (65.328209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)
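
Every qemu2 start in this report dies at the same point: the disk image is created successfully, but libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client and the dial of /var/run/socket_vmnet is refused, so no VM ever comes up. A triage sketch for the agent (paths are taken from the log above; whether socket_vmnet runs under launchd depends on how it was installed, so the second line is an assumption):

	ls -l /var/run/socket_vmnet                  # does the socket file exist at all?
	sudo launchctl list | grep -i socket_vmnet   # is any socket_vmnet daemon loaded?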

TestStartStop/group/embed-certs/serial/SecondStart (6.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-967000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-967000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (6.934145625s)

-- stdout --
	* [embed-certs-967000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-967000" primary control-plane node in "embed-certs-967000" cluster
	* Restarting existing qemu2 VM for "embed-certs-967000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-967000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:52:28.062511   11388 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:28.062613   11388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:28.062616   11388 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:28.062618   11388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:28.062748   11388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:28.063750   11388 out.go:298] Setting JSON to false
	I0408 04:52:28.079881   11388 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6717,"bootTime":1712570431,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:52:28.079938   11388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:52:28.084812   11388 out.go:177] * [embed-certs-967000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:52:28.093826   11388 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:52:28.093859   11388 notify.go:220] Checking for updates...
	I0408 04:52:28.097858   11388 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:52:28.100840   11388 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:52:28.103834   11388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:52:28.106813   11388 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:52:28.109857   11388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:52:28.113108   11388 config.go:182] Loaded profile config "embed-certs-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:28.113358   11388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:52:28.117730   11388 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:52:28.124855   11388 start.go:297] selected driver: qemu2
	I0408 04:52:28.124863   11388 start.go:901] validating driver "qemu2" against &{Name:embed-certs-967000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:28.124920   11388 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:52:28.127312   11388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:52:28.127367   11388 cni.go:84] Creating CNI manager for ""
	I0408 04:52:28.127374   11388 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:52:28.127405   11388 start.go:340] cluster config:
	{Name:embed-certs-967000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:28.131765   11388 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:28.138833   11388 out.go:177] * Starting "embed-certs-967000" primary control-plane node in "embed-certs-967000" cluster
	I0408 04:52:28.142698   11388 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:52:28.142715   11388 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:52:28.142726   11388 cache.go:56] Caching tarball of preloaded images
	I0408 04:52:28.142788   11388 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:52:28.142793   11388 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:52:28.142857   11388 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/embed-certs-967000/config.json ...
	I0408 04:52:28.143491   11388 start.go:360] acquireMachinesLock for embed-certs-967000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:28.143523   11388 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "embed-certs-967000"
	I0408 04:52:28.143531   11388 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:28.143536   11388 fix.go:54] fixHost starting: 
	I0408 04:52:28.143642   11388 fix.go:112] recreateIfNeeded on embed-certs-967000: state=Stopped err=<nil>
	W0408 04:52:28.143649   11388 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:28.151811   11388 out.go:177] * Restarting existing qemu2 VM for "embed-certs-967000" ...
	I0408 04:52:28.155804   11388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b0:c1:09:c6:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2
	I0408 04:52:28.157824   11388 main.go:141] libmachine: STDOUT: 
	I0408 04:52:28.157841   11388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:28.157867   11388 fix.go:56] duration metric: took 14.330958ms for fixHost
	I0408 04:52:28.157871   11388 start.go:83] releasing machines lock for "embed-certs-967000", held for 14.344959ms
	W0408 04:52:28.157878   11388 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:28.157918   11388 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:28.157922   11388 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:33.158760   11388 start.go:360] acquireMachinesLock for embed-certs-967000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:34.888445   11388 start.go:364] duration metric: took 1.729597417s to acquireMachinesLock for "embed-certs-967000"
	I0408 04:52:34.888654   11388 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:34.888674   11388 fix.go:54] fixHost starting: 
	I0408 04:52:34.889393   11388 fix.go:112] recreateIfNeeded on embed-certs-967000: state=Stopped err=<nil>
	W0408 04:52:34.889424   11388 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:34.911264   11388 out.go:177] * Restarting existing qemu2 VM for "embed-certs-967000" ...
	I0408 04:52:34.917353   11388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b0:c1:09:c6:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/embed-certs-967000/disk.qcow2
	I0408 04:52:34.926811   11388 main.go:141] libmachine: STDOUT: 
	I0408 04:52:34.926872   11388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:34.926991   11388 fix.go:56] duration metric: took 38.314291ms for fixHost
	I0408 04:52:34.927005   11388 start.go:83] releasing machines lock for "embed-certs-967000", held for 38.503792ms
	W0408 04:52:34.927211   11388 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-967000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-967000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:34.942182   11388 out.go:177] 
	W0408 04:52:34.946219   11388 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:34.946270   11388 out.go:239] * 
	* 
	W0408 04:52:34.948488   11388 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:34.957208   11388 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-967000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (53.029542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.99s)
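
The restart path fails identically to the create path: fixHost skips disk creation ("Skipping create...Using existing machine configuration") and still dies on the same unix-socket dial. A rough reachability probe, assuming the BSD netcat shipped with macOS:

	nc -U /var/run/socket_vmnet < /dev/null && echo reachable || echo refused   # expect "refused" on this agent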

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-730000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-730000 create -f testdata/busybox.yaml: exit status 1 (31.160292ms)

** stderr ** 
	error: context "default-k8s-diff-port-730000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-730000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (32.756333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (36.089584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
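
The kubectl errors in these blocks are secondary failures: because the start never succeeded, minikube never wrote a default-k8s-diff-port-730000 context into the kubeconfig, so every "kubectl --context" invocation fails with "context ... does not exist". Stock kubectl confirms it:

	kubectl config get-contexts   # the default-k8s-diff-port-730000 entry should be absent on this agent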

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-967000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (35.443084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-967000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-967000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-967000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.548917ms)

** stderr ** 
	error: context "embed-certs-967000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-967000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (33.575333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
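
Note: the repeated `context "embed-certs-967000" does not exist` errors above are produced by client-go's kubeconfig loader: because the profile's first start failed, minikube never wrote a kubectl context for it, so every later step that builds a client config fails the same way. A minimal standalone sketch of that lookup (a hypothetical reproduction, not the test's own code; the context name is copied from the log):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (KUBECONFIG / ~/.kube/config) and
		// force the context the failed tests ask for.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-967000"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)

		// With the profile never started, no such context exists, and this
		// returns the same `context "embed-certs-967000" does not exist` error.
		if _, err := cfg.ClientConfig(); err != nil {
			fmt.Println("client config:", err)
		}
	}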

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-730000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-730000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-730000 describe deploy/metrics-server -n kube-system: exit status 1 (28.466208ms)

** stderr ** 
	error: context "default-k8s-diff-port-730000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-730000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (39.03875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-967000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (32.911625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
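
Note: the `(-want +got)` block above is a go-cmp style diff: every expected image carries a `-` prefix because `image list` against the never-started profile returned an empty list. A minimal sketch of how such a diff is produced (a hypothetical illustration using github.com/google/go-cmp; the real test has its own comparison helper):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Two of the expected v1.29.3 images, copied from the diff above.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.29.3",
			"registry.k8s.io/pause:3.9",
		}
		got := []string{} // a stopped profile reports no images

		// "-" lines are entries missing from got; "+" lines would be extras.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}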

TestStartStop/group/embed-certs/serial/Pause (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-967000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-967000 --alsologtostderr -v=1: exit status 83 (50.165458ms)

-- stdout --
	* The control-plane node embed-certs-967000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-967000"

-- /stdout --
** stderr ** 
	I0408 04:52:35.240309   11421 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:35.240440   11421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:35.240444   11421 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:35.240447   11421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:35.240615   11421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:35.240846   11421 out.go:298] Setting JSON to false
	I0408 04:52:35.240858   11421 mustload.go:65] Loading cluster: embed-certs-967000
	I0408 04:52:35.241045   11421 config.go:182] Loaded profile config "embed-certs-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:35.244783   11421 out.go:177] * The control-plane node embed-certs-967000 host is not running: state=Stopped
	I0408 04:52:35.252668   11421 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-967000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-967000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (35.109958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (30.008ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-967000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)

TestStartStop/group/newest-cni/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-070000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-070000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.0: exit status 80 (9.819432209s)

-- stdout --
	* [newest-cni-070000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-070000" primary control-plane node in "newest-cni-070000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-070000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:52:35.715560   11451 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:35.715700   11451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:35.715703   11451 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:35.715706   11451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:35.715835   11451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:35.716987   11451 out.go:298] Setting JSON to false
	I0408 04:52:35.733477   11451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6724,"bootTime":1712570431,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:52:35.733541   11451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:52:35.739020   11451 out.go:177] * [newest-cni-070000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:52:35.747086   11451 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:52:35.751946   11451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:52:35.747108   11451 notify.go:220] Checking for updates...
	I0408 04:52:35.758036   11451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:52:35.761023   11451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:52:35.764033   11451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:52:35.767093   11451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:52:35.770400   11451 config.go:182] Loaded profile config "default-k8s-diff-port-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:35.770460   11451 config.go:182] Loaded profile config "multinode-464000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:35.770524   11451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:52:35.774995   11451 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 04:52:35.780902   11451 start.go:297] selected driver: qemu2
	I0408 04:52:35.780909   11451 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:52:35.780915   11451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:52:35.783280   11451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0408 04:52:35.783304   11451 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0408 04:52:35.792005   11451 out.go:177] * Automatically selected the socket_vmnet network
	I0408 04:52:35.795060   11451 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 04:52:35.795124   11451 cni.go:84] Creating CNI manager for ""
	I0408 04:52:35.795132   11451 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:52:35.795137   11451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:52:35.795184   11451 start.go:340] cluster config:
	{Name:newest-cni-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:35.800048   11451 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:35.806990   11451 out.go:177] * Starting "newest-cni-070000" primary control-plane node in "newest-cni-070000" cluster
	I0408 04:52:35.810999   11451 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime docker
	I0408 04:52:35.811017   11451 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0408 04:52:35.811026   11451 cache.go:56] Caching tarball of preloaded images
	I0408 04:52:35.811083   11451 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:52:35.811095   11451 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on docker
	I0408 04:52:35.811171   11451 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/newest-cni-070000/config.json ...
	I0408 04:52:35.811184   11451 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/newest-cni-070000/config.json: {Name:mk30b77a4e3511435d0a454255da8b6cbfe8a472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:52:35.811443   11451 start.go:360] acquireMachinesLock for newest-cni-070000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:35.811476   11451 start.go:364] duration metric: took 27µs to acquireMachinesLock for "newest-cni-070000"
	I0408 04:52:35.811488   11451 start.go:93] Provisioning new machine with config: &{Name:newest-cni-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:52:35.811547   11451 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:52:35.820014   11451 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:52:35.837858   11451 start.go:159] libmachine.API.Create for "newest-cni-070000" (driver="qemu2")
	I0408 04:52:35.837888   11451 client.go:168] LocalClient.Create starting
	I0408 04:52:35.837955   11451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:52:35.837984   11451 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:35.837992   11451 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:35.838033   11451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:52:35.838063   11451 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:35.838069   11451 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:35.838464   11451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:52:35.984877   11451 main.go:141] libmachine: Creating SSH key...
	I0408 04:52:36.049187   11451 main.go:141] libmachine: Creating Disk image...
	I0408 04:52:36.049192   11451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:52:36.049364   11451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2
	I0408 04:52:36.061526   11451 main.go:141] libmachine: STDOUT: 
	I0408 04:52:36.061556   11451 main.go:141] libmachine: STDERR: 
	I0408 04:52:36.061602   11451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2 +20000M
	I0408 04:52:36.072307   11451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:52:36.072324   11451 main.go:141] libmachine: STDERR: 
	I0408 04:52:36.072344   11451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2
	I0408 04:52:36.072350   11451 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:52:36.072376   11451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ba:10:ed:e4:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2
	I0408 04:52:36.074005   11451 main.go:141] libmachine: STDOUT: 
	I0408 04:52:36.074020   11451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:36.074039   11451 client.go:171] duration metric: took 236.149ms to LocalClient.Create
	I0408 04:52:38.076206   11451 start.go:128] duration metric: took 2.264669334s to createHost
	I0408 04:52:38.076264   11451 start.go:83] releasing machines lock for "newest-cni-070000", held for 2.264816333s
	W0408 04:52:38.076347   11451 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:38.087467   11451 out.go:177] * Deleting "newest-cni-070000" in qemu2 ...
	W0408 04:52:38.124271   11451 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:38.124332   11451 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:43.125903   11451 start.go:360] acquireMachinesLock for newest-cni-070000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:43.126286   11451 start.go:364] duration metric: took 291.667µs to acquireMachinesLock for "newest-cni-070000"
	I0408 04:52:43.126417   11451 start.go:93] Provisioning new machine with config: &{Name:newest-cni-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 04:52:43.126759   11451 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 04:52:43.136398   11451 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 04:52:43.185434   11451 start.go:159] libmachine.API.Create for "newest-cni-070000" (driver="qemu2")
	I0408 04:52:43.185489   11451 client.go:168] LocalClient.Create starting
	I0408 04:52:43.185602   11451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/ca.pem
	I0408 04:52:43.185664   11451 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:43.185679   11451 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:43.185743   11451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18588-7343/.minikube/certs/cert.pem
	I0408 04:52:43.185785   11451 main.go:141] libmachine: Decoding PEM data...
	I0408 04:52:43.185800   11451 main.go:141] libmachine: Parsing certificate...
	I0408 04:52:43.186531   11451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso...
	I0408 04:52:43.342248   11451 main.go:141] libmachine: Creating SSH key...
	I0408 04:52:43.413919   11451 main.go:141] libmachine: Creating Disk image...
	I0408 04:52:43.413930   11451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 04:52:43.414117   11451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2.raw /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2
	I0408 04:52:43.426456   11451 main.go:141] libmachine: STDOUT: 
	I0408 04:52:43.426480   11451 main.go:141] libmachine: STDERR: 
	I0408 04:52:43.426539   11451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2 +20000M
	I0408 04:52:43.437061   11451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 04:52:43.437084   11451 main.go:141] libmachine: STDERR: 
	I0408 04:52:43.437095   11451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2
	I0408 04:52:43.437103   11451 main.go:141] libmachine: Starting QEMU VM...
	I0408 04:52:43.437145   11451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:04:7a:22:c8:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2
	I0408 04:52:43.438783   11451 main.go:141] libmachine: STDOUT: 
	I0408 04:52:43.438799   11451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:43.438812   11451 client.go:171] duration metric: took 253.322834ms to LocalClient.Create
	I0408 04:52:45.440943   11451 start.go:128] duration metric: took 2.3141945s to createHost
	I0408 04:52:45.441002   11451 start.go:83] releasing machines lock for "newest-cni-070000", held for 2.314728875s
	W0408 04:52:45.441340   11451 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:45.458969   11451 out.go:177] 
	W0408 04:52:45.464916   11451 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:45.464941   11451 out.go:239] * 
	* 
	W0408 04:52:45.467519   11451 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:45.480944   11451 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-070000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000: exit status 7 (67.699708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.89s)
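
Note: every qemu2 start in this run dies at the same point: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon on the CI host is not accepting connections, so QEMU never receives a network file descriptor and the VM is torn down. A minimal probe for that precondition (a hypothetical diagnostic, not part of the test suite; the socket path is the SocketVMnetPath recorded in the cluster config above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The qemu2 driver launches QEMU through socket_vmnet_client, which
		// dials this unix socket; if the daemon is down, the dial fails the
		// same way the log does.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // e.g. connection refused
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

How the daemon is managed on this CI host is not visible in the log, so restarting it is left to whatever service mechanism the host uses.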

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (6.529627667s)

-- stdout --
	* [default-k8s-diff-port-730000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-730000" primary control-plane node in "default-k8s-diff-port-730000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:52:39.021974   11478 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:39.022102   11478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:39.022106   11478 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:39.022108   11478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:39.022246   11478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:39.023262   11478 out.go:298] Setting JSON to false
	I0408 04:52:39.039523   11478 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6728,"bootTime":1712570431,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:52:39.039579   11478 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:52:39.042148   11478 out.go:177] * [default-k8s-diff-port-730000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:52:39.049719   11478 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:52:39.049788   11478 notify.go:220] Checking for updates...
	I0408 04:52:39.055615   11478 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:52:39.058615   11478 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:52:39.060259   11478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:52:39.063632   11478 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:52:39.066642   11478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:52:39.069971   11478 config.go:182] Loaded profile config "default-k8s-diff-port-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:39.070235   11478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:52:39.074614   11478 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:52:39.081676   11478 start.go:297] selected driver: qemu2
	I0408 04:52:39.081683   11478 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:39.081731   11478 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:52:39.084041   11478 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 04:52:39.084086   11478 cni.go:84] Creating CNI manager for ""
	I0408 04:52:39.084093   11478 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:52:39.084127   11478 start.go:340] cluster config:
	{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:39.088476   11478 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:39.095671   11478 out.go:177] * Starting "default-k8s-diff-port-730000" primary control-plane node in "default-k8s-diff-port-730000" cluster
	I0408 04:52:39.100602   11478 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:52:39.100621   11478 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:52:39.100634   11478 cache.go:56] Caching tarball of preloaded images
	I0408 04:52:39.100685   11478 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:52:39.100690   11478 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:52:39.100752   11478 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/default-k8s-diff-port-730000/config.json ...
	I0408 04:52:39.101390   11478 start.go:360] acquireMachinesLock for default-k8s-diff-port-730000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:39.101421   11478 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "default-k8s-diff-port-730000"
	I0408 04:52:39.101430   11478 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:39.101434   11478 fix.go:54] fixHost starting: 
	I0408 04:52:39.101548   11478 fix.go:112] recreateIfNeeded on default-k8s-diff-port-730000: state=Stopped err=<nil>
	W0408 04:52:39.101556   11478 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:39.105648   11478 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-730000" ...
	I0408 04:52:39.113612   11478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:dd:28:51:c5:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0408 04:52:39.115655   11478 main.go:141] libmachine: STDOUT: 
	I0408 04:52:39.115676   11478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:39.115706   11478 fix.go:56] duration metric: took 14.271542ms for fixHost
	I0408 04:52:39.115709   11478 start.go:83] releasing machines lock for "default-k8s-diff-port-730000", held for 14.28425ms
	W0408 04:52:39.115716   11478 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:39.115754   11478 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:39.115759   11478 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:44.117845   11478 start.go:360] acquireMachinesLock for default-k8s-diff-port-730000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:45.441180   11478 start.go:364] duration metric: took 1.3232575s to acquireMachinesLock for "default-k8s-diff-port-730000"
	I0408 04:52:45.441351   11478 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:45.441368   11478 fix.go:54] fixHost starting: 
	I0408 04:52:45.442170   11478 fix.go:112] recreateIfNeeded on default-k8s-diff-port-730000: state=Stopped err=<nil>
	W0408 04:52:45.442199   11478 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:45.461981   11478 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-730000" ...
	I0408 04:52:45.468059   11478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:dd:28:51:c5:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0408 04:52:45.477074   11478 main.go:141] libmachine: STDOUT: 
	I0408 04:52:45.477137   11478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:45.477228   11478 fix.go:56] duration metric: took 35.8615ms for fixHost
	I0408 04:52:45.477243   11478 start.go:83] releasing machines lock for "default-k8s-diff-port-730000", held for 36.025292ms
	W0408 04:52:45.477479   11478 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:45.492886   11478 out.go:177] 
	W0408 04:52:45.496964   11478 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:45.496995   11478 out.go:239] * 
	* 
	W0408 04:52:45.499006   11478 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:45.510125   11478 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (57.043333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.59s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-730000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (40.712708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-730000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-730000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-730000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.300292ms)

** stderr ** 
	error: context "default-k8s-diff-port-730000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-730000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (40.116541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)
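
The repeated `context "default-k8s-diff-port-730000" does not exist` errors are a downstream symptom rather than a separate bug: minikube only writes a kubeconfig context after a successful start, and the SecondStart above never completed. A short client-go sketch (illustrative only, not how the harness checks this) that lists the contexts actually present in the default kubeconfig:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig kubectl would use and enumerate its
		// contexts; a profile that never started simply has no entry here.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		for name := range cfg.Contexts {
			fmt.Println("context:", name)
		}
	}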

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-730000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (30.968833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
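
The `(-want +got)` listing above is a go-cmp style diff: every expected image carries a `-` prefix because `image list` returns nothing for a VM that never started, leaving the got side empty. A reduced sketch of how such a diff is produced (the want list is abbreviated, and whether the test uses exactly this call is an assumption):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Two of the eight expected v1.29.3 images from the diff above.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.29.3",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // empty: the stopped VM reports no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}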

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-730000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-730000 --alsologtostderr -v=1: exit status 83 (42.802ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-730000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-730000"

-- /stdout --
** stderr ** 
	I0408 04:52:45.788858   11515 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:45.789020   11515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:45.789026   11515 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:45.789029   11515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:45.789149   11515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:45.789367   11515 out.go:298] Setting JSON to false
	I0408 04:52:45.789378   11515 mustload.go:65] Loading cluster: default-k8s-diff-port-730000
	I0408 04:52:45.789601   11515 config.go:182] Loaded profile config "default-k8s-diff-port-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:52:45.793468   11515 out.go:177] * The control-plane node default-k8s-diff-port-730000 host is not running: state=Stopped
	I0408 04:52:45.796292   11515 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-730000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-730000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (31.23725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (31.229958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
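
Note the distinct exit codes in this block: `pause` returns 83 while `status` returns 7 against the same stopped host, and the harness fails on 83 but treats 7 as "may be ok". A sketch of how a caller can branch on those codes (the codes are read off the log above, not taken from minikube documentation):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "pause", "-p", "default-k8s-diff-port-730000")
		if err := cmd.Run(); err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				// 83 in the run above: control-plane host not running.
				fmt.Println("pause exited with status", exitErr.ExitCode())
				return
			}
			fmt.Println("minikube did not run:", err) // e.g. binary not on PATH
		}
	}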

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-070000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-070000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.0: exit status 80 (5.190797s)

-- stdout --
	* [newest-cni-070000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-070000" primary control-plane node in "newest-cni-070000" cluster
	* Restarting existing qemu2 VM for "newest-cni-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 04:52:49.248438   11556 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:49.248561   11556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:49.248564   11556 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:49.248567   11556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:49.248706   11556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:49.249718   11556 out.go:298] Setting JSON to false
	I0408 04:52:49.265730   11556 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6738,"bootTime":1712570431,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:52:49.265792   11556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:52:49.270825   11556 out.go:177] * [newest-cni-070000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:52:49.277837   11556 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:52:49.277888   11556 notify.go:220] Checking for updates...
	I0408 04:52:49.282751   11556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:52:49.285790   11556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:52:49.288822   11556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:52:49.291831   11556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:52:49.294817   11556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:52:49.298138   11556 config.go:182] Loaded profile config "newest-cni-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.0
	I0408 04:52:49.298425   11556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:52:49.302808   11556 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:52:49.309809   11556 start.go:297] selected driver: qemu2
	I0408 04:52:49.309815   11556 start.go:901] validating driver "qemu2" against &{Name:newest-cni-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:49.309864   11556 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:52:49.312274   11556 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 04:52:49.312314   11556 cni.go:84] Creating CNI manager for ""
	I0408 04:52:49.312322   11556 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:52:49.312366   11556 start.go:340] cluster config:
	{Name:newest-cni-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:52:49.316680   11556 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:52:49.324797   11556 out.go:177] * Starting "newest-cni-070000" primary control-plane node in "newest-cni-070000" cluster
	I0408 04:52:49.328698   11556 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime docker
	I0408 04:52:49.328710   11556 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0408 04:52:49.328718   11556 cache.go:56] Caching tarball of preloaded images
	I0408 04:52:49.328771   11556 preload.go:173] Found /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 04:52:49.328777   11556 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on docker
	I0408 04:52:49.328830   11556 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/newest-cni-070000/config.json ...
	I0408 04:52:49.329457   11556 start.go:360] acquireMachinesLock for newest-cni-070000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:49.329494   11556 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "newest-cni-070000"
	I0408 04:52:49.329504   11556 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:49.329509   11556 fix.go:54] fixHost starting: 
	I0408 04:52:49.329624   11556 fix.go:112] recreateIfNeeded on newest-cni-070000: state=Stopped err=<nil>
	W0408 04:52:49.329632   11556 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:49.332886   11556 out.go:177] * Restarting existing qemu2 VM for "newest-cni-070000" ...
	I0408 04:52:49.340839   11556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:04:7a:22:c8:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2
	I0408 04:52:49.342932   11556 main.go:141] libmachine: STDOUT: 
	I0408 04:52:49.342954   11556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:49.342980   11556 fix.go:56] duration metric: took 13.470208ms for fixHost
	I0408 04:52:49.342985   11556 start.go:83] releasing machines lock for "newest-cni-070000", held for 13.486916ms
	W0408 04:52:49.342991   11556 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:49.343025   11556 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:49.343029   11556 start.go:728] Will try again in 5 seconds ...
	I0408 04:52:54.344697   11556 start.go:360] acquireMachinesLock for newest-cni-070000: {Name:mka28a4fcd336b79bf42caa154ba006a43c89ecb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 04:52:54.345157   11556 start.go:364] duration metric: took 313.25µs to acquireMachinesLock for "newest-cni-070000"
	I0408 04:52:54.345290   11556 start.go:96] Skipping create...Using existing machine configuration
	I0408 04:52:54.345311   11556 fix.go:54] fixHost starting: 
	I0408 04:52:54.346063   11556 fix.go:112] recreateIfNeeded on newest-cni-070000: state=Stopped err=<nil>
	W0408 04:52:54.346089   11556 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 04:52:54.354506   11556 out.go:177] * Restarting existing qemu2 VM for "newest-cni-070000" ...
	I0408 04:52:54.359816   11556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:04:7a:22:c8:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18588-7343/.minikube/machines/newest-cni-070000/disk.qcow2
	I0408 04:52:54.369805   11556 main.go:141] libmachine: STDOUT: 
	I0408 04:52:54.369864   11556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 04:52:54.369942   11556 fix.go:56] duration metric: took 24.6325ms for fixHost
	I0408 04:52:54.369963   11556 start.go:83] releasing machines lock for "newest-cni-070000", held for 24.783125ms
	W0408 04:52:54.370099   11556 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-070000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-070000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 04:52:54.378493   11556 out.go:177] 
	W0408 04:52:54.382367   11556 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 04:52:54.382404   11556 out.go:239] * 
	* 
	W0408 04:52:54.384966   11556 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:52:54.394382   11556 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-070000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000: exit status 7 (70.32575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
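
The stderr trace shows the shape of the start path's retry logic: one fixHost attempt, a "Will try again in 5 seconds", a second attempt, then the GUEST_PROVISION exit. A reduced sketch of that control flow (startHost here is a stand-in for the driver call, not minikube's real function):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Stand-in for the driver start that fails twice in the log above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}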

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-070000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000: exit status 7 (31.825417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-070000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-070000 --alsologtostderr -v=1: exit status 83 (44.81425ms)

-- stdout --
	* The control-plane node newest-cni-070000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-070000"

-- /stdout --
** stderr ** 
	I0408 04:52:54.585909   11570 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:52:54.586067   11570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:54.586071   11570 out.go:304] Setting ErrFile to fd 2...
	I0408 04:52:54.586073   11570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:52:54.586212   11570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:52:54.586433   11570 out.go:298] Setting JSON to false
	I0408 04:52:54.586441   11570 mustload.go:65] Loading cluster: newest-cni-070000
	I0408 04:52:54.586646   11570 config.go:182] Loaded profile config "newest-cni-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.0
	I0408 04:52:54.591181   11570 out.go:177] * The control-plane node newest-cni-070000 host is not running: state=Stopped
	I0408 04:52:54.594389   11570 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-070000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-070000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000: exit status 7 (32.1635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-070000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000: exit status 7 (32.221708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.24
12 TestDownloadOnly/v1.29.3/json-events 18.31
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.23
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.30.0-rc.0/json-events 18.55
22 TestDownloadOnly/v1.30.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.0/LogsDuration 0.09
27 TestDownloadOnly/v1.30.0-rc.0/DeleteAll 0.23
28 TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.98
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.13
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 7.93
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.78
64 TestFunctional/serial/CacheCmd/cache/add_local 1.17
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.24
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 0.21
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.36
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.16
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.21
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.04
249 TestStoppedBinaryUpgrade/Setup 1.41
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.48
267 TestNoKubernetes/serial/Stop 2.1
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
284 TestStartStop/group/old-k8s-version/serial/Stop 1.92
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
297 TestStartStop/group/no-preload/serial/Stop 3.22
298 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/embed-certs/serial/Stop 3.62
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
317 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.62
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
326 TestStartStop/group/newest-cni/serial/Stop 3.45
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-465000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-465000: exit status 85 (95.754833ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |          |
	|         | -p download-only-465000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=qemu2                 |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 04:26:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 04:26:13.464156    7751 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:26:13.464298    7751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:26:13.464301    7751 out.go:304] Setting ErrFile to fd 2...
	I0408 04:26:13.464304    7751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:26:13.464418    7751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	W0408 04:26:13.464511    7751 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18588-7343/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18588-7343/.minikube/config/config.json: no such file or directory
	I0408 04:26:13.465711    7751 out.go:298] Setting JSON to true
	I0408 04:26:13.485199    7751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5142,"bootTime":1712570431,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:26:13.485260    7751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:26:13.491061    7751 out.go:97] [download-only-465000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:26:13.496220    7751 out.go:169] MINIKUBE_LOCATION=18588
	I0408 04:26:13.491163    7751 notify.go:220] Checking for updates...
	W0408 04:26:13.491186    7751 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 04:26:13.505617    7751 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:26:13.510196    7751 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:26:13.513603    7751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:26:13.517434    7751 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	W0408 04:26:13.524829    7751 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 04:26:13.525025    7751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:26:13.529329    7751 out.go:97] Using the qemu2 driver based on user configuration
	I0408 04:26:13.529352    7751 start.go:297] selected driver: qemu2
	I0408 04:26:13.529368    7751 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:26:13.529470    7751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:26:13.533137    7751 out.go:169] Automatically selected the socket_vmnet network
	I0408 04:26:13.539393    7751 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0408 04:26:13.539492    7751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 04:26:13.539601    7751 cni.go:84] Creating CNI manager for ""
	I0408 04:26:13.539620    7751 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 04:26:13.539672    7751 start.go:340] cluster config:
	{Name:download-only-465000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:26:13.544563    7751 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:26:13.548116    7751 out.go:97] Downloading VM boot image ...
	I0408 04:26:13.548133    7751 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/iso/arm64/minikube-v1.33.0-1712138767-18566-arm64.iso
	I0408 04:26:17.864369    7751 out.go:97] Starting "download-only-465000" primary control-plane node in "download-only-465000" cluster
	I0408 04:26:17.864396    7751 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:26:17.920712    7751 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 04:26:17.920722    7751 cache.go:56] Caching tarball of preloaded images
	I0408 04:26:17.920939    7751 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:26:17.927803    7751 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 04:26:17.927810    7751 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:18.003040    7751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 04:26:23.528647    7751 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:23.528821    7751 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:24.226335    7751 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 04:26:24.226522    7751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/download-only-465000/config.json ...
	I0408 04:26:24.226551    7751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/download-only-465000/config.json: {Name:mk5cedd07cfbe42396ac5afb2a307579f9beedc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:26:24.226780    7751 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 04:26:24.226956    7751 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0408 04:26:24.910954    7751 out.go:169] 
	W0408 04:26:24.917026    7751 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240 0x1067bf240] Decompressors:map[bz2:0x140005b7630 gz:0x140005b7638 tar:0x140005b75e0 tar.bz2:0x140005b75f0 tar.gz:0x140005b7600 tar.xz:0x140005b7610 tar.zst:0x140005b7620 tbz2:0x140005b75f0 tgz:0x140005b7600 txz:0x140005b7610 tzst:0x140005b7620 xz:0x140005b7640 zip:0x140005b7650 zst:0x140005b7648] Getters:map[file:0x140022148c0 http:0x140005f65f0 https:0x140005f6640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0408 04:26:24.917058    7751 out_reason.go:110] 
	W0408 04:26:24.924915    7751 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 04:26:24.928833    7751 out.go:169] 
	
	
	* The control-plane node download-only-465000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-465000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
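
The 404 recorded in this log comes from the cached-kubectl download: dl.k8s.io has no v1.20.0 darwin/arm64 kubectl, so the checksum fetch fails before the binary is even attempted. A quick probe of the response code (the URL is copied from the getter error above; a 404 is the expected result):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // the log's getter saw 404 here
	}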

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-465000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.24s)

TestDownloadOnly/v1.29.3/json-events (18.31s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-878000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-878000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 : (18.30653375s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (18.31s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-878000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-878000: exit status 85 (78.54825ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
	|         | -p download-only-465000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| delete  | -p download-only-465000        | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| start   | -o=json --download-only        | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
	|         | -p download-only-878000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 04:26:25
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 04:26:25.612443    7785 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:26:25.612555    7785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:26:25.612559    7785 out.go:304] Setting ErrFile to fd 2...
	I0408 04:26:25.612561    7785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:26:25.612686    7785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:26:25.613760    7785 out.go:298] Setting JSON to true
	I0408 04:26:25.629839    7785 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5154,"bootTime":1712570431,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:26:25.629915    7785 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:26:25.634882    7785 out.go:97] [download-only-878000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:26:25.638793    7785 out.go:169] MINIKUBE_LOCATION=18588
	I0408 04:26:25.634953    7785 notify.go:220] Checking for updates...
	I0408 04:26:25.645850    7785 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:26:25.648854    7785 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:26:25.651818    7785 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:26:25.653495    7785 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	W0408 04:26:25.659853    7785 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 04:26:25.660057    7785 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:26:25.663797    7785 out.go:97] Using the qemu2 driver based on user configuration
	I0408 04:26:25.663805    7785 start.go:297] selected driver: qemu2
	I0408 04:26:25.663808    7785 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:26:25.663850    7785 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:26:25.666770    7785 out.go:169] Automatically selected the socket_vmnet network
	I0408 04:26:25.672131    7785 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0408 04:26:25.672226    7785 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 04:26:25.672269    7785 cni.go:84] Creating CNI manager for ""
	I0408 04:26:25.672277    7785 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:26:25.672287    7785 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:26:25.672332    7785 start.go:340] cluster config:
	{Name:download-only-878000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:26:25.676823    7785 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:26:25.679834    7785 out.go:97] Starting "download-only-878000" primary control-plane node in "download-only-878000" cluster
	I0408 04:26:25.679843    7785 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:26:25.731891    7785 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:26:25.731908    7785 cache.go:56] Caching tarball of preloaded images
	I0408 04:26:25.733987    7785 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:26:25.737356    7785 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0408 04:26:25.737363    7785 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:25.809881    7785 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 04:26:30.076646    7785 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:30.076810    7785 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:30.634579    7785 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 04:26:30.634783    7785 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/download-only-878000/config.json ...
	I0408 04:26:30.634801    7785 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/download-only-878000/config.json: {Name:mke55c5659d9a1a4e0e65561c28f9bf57da93a9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:26:30.635109    7785 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 04:26:30.635384    7785 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-878000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-878000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
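
Note: as the captured stdout shows, "minikube logs" exits with status 85 on a download-only profile because no host was ever created; the test asserts exactly that:

    out/minikube-darwin-arm64 logs -p download-only-878000
    # exit status 85: the control-plane node host does not exist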

TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-878000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.30.0-rc.0/json-events (18.55s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-444000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-444000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=docker --driver=qemu2 : (18.545343458s)
--- PASS: TestDownloadOnly/v1.30.0-rc.0/json-events (18.55s)

TestDownloadOnly/v1.30.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-444000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-444000: exit status 85 (90.6185ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
	|         | -p download-only-465000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| delete  | -p download-only-465000           | download-only-465000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| start   | -o=json --download-only           | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
	|         | -p download-only-878000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| delete  | -p download-only-878000           | download-only-878000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT | 08 Apr 24 04:26 PDT |
	| start   | -o=json --download-only           | download-only-444000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 04:26 PDT |                     |
	|         | -p download-only-444000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0 |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 04:26:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 04:26:44.456963    7824 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:26:44.457080    7824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:26:44.457083    7824 out.go:304] Setting ErrFile to fd 2...
	I0408 04:26:44.457086    7824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:26:44.457204    7824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:26:44.458221    7824 out.go:298] Setting JSON to true
	I0408 04:26:44.474230    7824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5173,"bootTime":1712570431,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:26:44.474292    7824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:26:44.479368    7824 out.go:97] [download-only-444000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:26:44.483326    7824 out.go:169] MINIKUBE_LOCATION=18588
	I0408 04:26:44.479469    7824 notify.go:220] Checking for updates...
	I0408 04:26:44.491338    7824 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:26:44.494325    7824 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:26:44.497296    7824 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:26:44.500317    7824 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	W0408 04:26:44.506226    7824 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 04:26:44.506374    7824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:26:44.509316    7824 out.go:97] Using the qemu2 driver based on user configuration
	I0408 04:26:44.509323    7824 start.go:297] selected driver: qemu2
	I0408 04:26:44.509326    7824 start.go:901] validating driver "qemu2" against <nil>
	I0408 04:26:44.509368    7824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 04:26:44.512272    7824 out.go:169] Automatically selected the socket_vmnet network
	I0408 04:26:44.517443    7824 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0408 04:26:44.517529    7824 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 04:26:44.517576    7824 cni.go:84] Creating CNI manager for ""
	I0408 04:26:44.517584    7824 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 04:26:44.517590    7824 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 04:26:44.517632    7824 start.go:340] cluster config:
	{Name:download-only-444000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:download-only-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:26:44.521812    7824 iso.go:125] acquiring lock: {Name:mkad3f120dba06a61dd7bdd1244e169071d2da98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 04:26:44.525268    7824 out.go:97] Starting "download-only-444000" primary control-plane node in "download-only-444000" cluster
	I0408 04:26:44.525275    7824 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime docker
	I0408 04:26:44.577073    7824 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0408 04:26:44.577087    7824 cache.go:56] Caching tarball of preloaded images
	I0408 04:26:44.577248    7824 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime docker
	I0408 04:26:44.583940    7824 out.go:97] Downloading Kubernetes v1.30.0-rc.0 preload ...
	I0408 04:26:44.583947    7824 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:44.656927    7824 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:ca6210c132f70f5fa80373401a178c46 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0408 04:26:49.100063    7824 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:49.100232    7824 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 04:26:49.643989    7824 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on docker
	I0408 04:26:49.644178    7824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/download-only-444000/config.json ...
	I0408 04:26:49.644195    7824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18588-7343/.minikube/profiles/download-only-444000/config.json: {Name:mk1b5e000b7fc1d84b50794c0ecdaaf9f6929c0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 04:26:49.644454    7824 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime docker
	I0408 04:26:49.644568    7824 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18588-7343/.minikube/cache/darwin/arm64/v1.30.0-rc.0/kubectl
	
	
	* The control-plane node download-only-444000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-444000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.09s)

TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-444000
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-542000 --alsologtostderr --binary-mirror http://127.0.0.1:51009 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-542000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-542000
--- PASS: TestBinaryMirror (0.34s)
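
Note: this test points minikube's binary downloads at a local HTTP mirror via --binary-mirror. A sketch of the same invocation; the port is whatever the test's local server happened to bind, so treat the URL as illustrative:

    out/minikube-darwin-arm64 start --download-only -p <profile> \
      --alsologtostderr --binary-mirror http://127.0.0.1:51009 --driver=qemu2
    out/minikube-darwin-arm64 delete -p <profile>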

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-580000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-580000: exit status 85 (64.605084ms)

-- stdout --
	* Profile "addons-580000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-580000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-580000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-580000: exit status 85 (60.772709ms)

-- stdout --
	* Profile "addons-580000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-580000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
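
Note: both PreSetup checks assert that addon commands fail cleanly with exit status 85 when the target profile does not exist yet:

    out/minikube-darwin-arm64 addons enable dashboard -p addons-580000    # exit 85: profile not found
    out/minikube-darwin-arm64 addons disable dashboard -p addons-580000   # exit 85: profile not found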

TestHyperKitDriverInstallOrUpdate (9.98s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.98s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status: exit status 7 (33.08775ms)

-- stdout --
	nospam-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status: exit status 7 (31.845041ms)

-- stdout --
	nospam-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status: exit status 7 (32.325167ms)

-- stdout --
	nospam-294000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
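
Note: "status" against a stopped profile returns exit status 7, which this test expects on all three invocations (the --log_dir flag is omitted here for brevity):

    out/minikube-darwin-arm64 -p nospam-294000 status    # exit 7: host/kubelet/apiserver Stopped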

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause: exit status 83 (42.86425ms)

-- stdout --
	* The control-plane node nospam-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-294000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause: exit status 83 (42.9365ms)

-- stdout --
	* The control-plane node nospam-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-294000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause: exit status 83 (41.646ms)

-- stdout --
	* The control-plane node nospam-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-294000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause: exit status 83 (41.925166ms)

-- stdout --
	* The control-plane node nospam-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-294000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause: exit status 83 (41.773208ms)

-- stdout --
	* The control-plane node nospam-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-294000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause: exit status 83 (38.775125ms)

-- stdout --
	* The control-plane node nospam-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-294000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)
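
Note: pause and unpause both exit with status 83 when the control-plane host is not running, which is what these subtests assert (again simplified from the logged invocations):

    out/minikube-darwin-arm64 -p nospam-294000 pause      # exit 83: host not running
    out/minikube-darwin-arm64 -p nospam-294000 unpause    # exit 83: host not running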

TestErrorSpam/stop (7.93s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 stop: (2.73786725s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 stop: (3.10137075s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-294000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-294000 stop: (2.085053667s)
--- PASS: TestErrorSpam/stop (7.93s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18588-7343/.minikube/files/etc/test/nested/copy/7749/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
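
Note: the sync path logged above reflects minikube's file-sync convention: files placed under the .minikube files directory are copied into the node at the same relative path on the next start. A sketch assuming that convention (file name illustrative; in this run MINIKUBE_HOME already points at the .minikube directory):

    mkdir -p "$MINIKUBE_HOME/files/etc/test"
    echo hello > "$MINIKUBE_HOME/files/etc/test/hello"
    # after the next "minikube start", the file should appear in the node as /etc/test/hello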

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.78s)
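
Note: the remote-cache subtest simply adds registry images to the local minikube cache; the equivalent manual workflow:

    out/minikube-darwin-arm64 -p functional-756000 cache add registry.k8s.io/pause:3.1
    out/minikube-darwin-arm64 cache list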

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local295940508/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cache add minikube-local-cache-test:functional-756000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 cache delete minikube-local-cache-test:functional-756000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-756000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)
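
Note: the local variant builds a throwaway image, caches it, then deletes both the cache entry and the image. A condensed sketch of the same sequence; the build-context path is illustrative:

    docker build -t minikube-local-cache-test:functional-756000 <build-context>
    out/minikube-darwin-arm64 -p functional-756000 cache add minikube-local-cache-test:functional-756000
    out/minikube-darwin-arm64 -p functional-756000 cache delete minikube-local-cache-test:functional-756000
    docker rmi minikube-local-cache-test:functional-756000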

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 config get cpus: exit status 14 (33.22725ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 config get cpus: exit status 14 (36.19725ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
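
Note: the config round-trip above relies on "config get" exiting with status 14 whenever the key is unset:

    out/minikube-darwin-arm64 -p functional-756000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-756000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-756000 config unset cpus
    out/minikube-darwin-arm64 -p functional-756000 config get cpus     # exit 14: key not found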

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-756000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-756000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (162.067542ms)

-- stdout --
	* [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0408 04:28:46.262622    8456 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:28:46.262772    8456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.262776    8456 out.go:304] Setting ErrFile to fd 2...
	I0408 04:28:46.262779    8456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.262935    8456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:28:46.264269    8456 out.go:298] Setting JSON to false
	I0408 04:28:46.283489    8456 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5295,"bootTime":1712570431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:28:46.283554    8456 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:28:46.289217    8456 out.go:177] * [functional-756000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 04:28:46.297186    8456 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:28:46.297271    8456 notify.go:220] Checking for updates...
	I0408 04:28:46.301215    8456 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:28:46.304182    8456 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:28:46.307203    8456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:28:46.310189    8456 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:28:46.313183    8456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:28:46.316453    8456 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:28:46.316755    8456 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:28:46.321098    8456 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 04:28:46.328184    8456 start.go:297] selected driver: qemu2
	I0408 04:28:46.328190    8456 start.go:901] validating driver "qemu2" against &{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:28:46.328271    8456 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:28:46.335094    8456 out.go:177] 
	W0408 04:28:46.339180    8456 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0408 04:28:46.343151    8456 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-756000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
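
Note: the first dry run deliberately requests only 250MB and expects exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY, minimum 1800MB); the second dry run with default memory validates the existing profile without creating anything:

    out/minikube-darwin-arm64 start -p functional-756000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2   # exit 23
    out/minikube-darwin-arm64 start -p functional-756000 --dry-run --alsologtostderr -v=1 --driver=qemu2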

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-756000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-756000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.056709ms)

-- stdout --
	* [functional-756000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0408 04:28:46.485839    8467 out.go:291] Setting OutFile to fd 1 ...
	I0408 04:28:46.485973    8467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.485976    8467 out.go:304] Setting ErrFile to fd 2...
	I0408 04:28:46.485979    8467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 04:28:46.486107    8467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18588-7343/.minikube/bin
	I0408 04:28:46.487519    8467 out.go:298] Setting JSON to false
	I0408 04:28:46.504263    8467 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5295,"bootTime":1712570431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0408 04:28:46.504353    8467 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 04:28:46.509198    8467 out.go:177] * [functional-756000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	I0408 04:28:46.515173    8467 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 04:28:46.519198    8467 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	I0408 04:28:46.515230    8467 notify.go:220] Checking for updates...
	I0408 04:28:46.523154    8467 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 04:28:46.526149    8467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 04:28:46.529198    8467 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	I0408 04:28:46.532235    8467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 04:28:46.535510    8467 config.go:182] Loaded profile config "functional-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 04:28:46.535768    8467 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 04:28:46.540142    8467 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0408 04:28:46.547216    8467 start.go:297] selected driver: qemu2
	I0408 04:28:46.547223    8467 start.go:901] validating driver "qemu2" against &{Name:functional-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 04:28:46.547292    8467 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 04:28:46.554045    8467 out.go:177] 
	W0408 04:28:46.558151    8467 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0408 04:28:46.562171    8467 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
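The entry above exercises two behaviors at once: localized output (the French "Utilisation du pilote qemu2 basé sur le profil existant" means "Using the qemu2 driver based on the existing profile") and the RSRC_INSUFFICIENT_REQ_MEMORY guard, whose French message says the requested 250MiB allocation is below the usable minimum of 1800MB. A minimal Go sketch of driving the same check is below; selecting the locale via LC_ALL=fr is an assumption for illustration, not taken from the test source.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical re-creation of the check above: run a dry-run start with
	// too little memory under a French locale and expect a localized refusal.
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-756000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=qemu2")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: locale chosen via env
	out, err := cmd.CombinedOutput()            // a non-zero exit (status 23 above) is expected
	if err == nil || !strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("expected a localized RSRC_INSUFFICIENT_REQ_MEMORY failure")
		os.Exit(1)
	}
	fmt.Println("insufficient-memory request was rejected as expected")
}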

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.21s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.318423833s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-756000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.36s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image rm gcr.io/google-containers/addon-resizer:functional-756000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-756000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 image save --daemon gcr.io/google-containers/addon-resizer:functional-756000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-756000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "71.496875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.803084ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "76.0865ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "36.671875ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.014920333s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
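dscacheutil queries the macOS Directory Service cache, so a successful lookup of nginx-svc.default.svc.cluster.local. shows the tunnel's DNS integration works at the OS resolver level rather than only via /etc/resolv.conf. A small Go sketch of the same probe follows; the "ip_address:" key/value output format is an assumption about dscacheutil, not something shown in this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	host := "nginx-svc.default.svc.cluster.local."
	// Same query the test runs: resolve through the macOS resolver cache.
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", host).Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// dscacheutil prints "key: value" lines; an ip_address line means the
	// name resolved (assumed output format).
	if strings.Contains(string(out), "ip_address:") {
		fmt.Println("DNS resolution for", host, "is working")
	} else {
		fmt.Println("no address returned for", host)
	}
}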

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-756000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-756000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-756000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-756000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.21s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-703000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-703000 --output=json --user=testUser: (3.207684792s)
--- PASS: TestJSONOutput/stop/Command (3.21s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-404000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-404000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.825ms)

-- stdout --
	{"specversion":"1.0","id":"e529cc52-8d58-42c2-ae3f-dab9e9803bd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-404000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6cec429c-dbde-44ab-95d9-4a834dcf2e66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18588"}}
	{"specversion":"1.0","id":"c68255e1-99f3-4c87-b4a5-72af1bb0e07c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig"}}
	{"specversion":"1.0","id":"5cabb133-56f4-4465-9938-1c6a0964ddd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5b8a0b02-1d2d-4d10-9aa6-5098105ea66c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27a39f09-8529-4820-8826-aff7f233246b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube"}}
	{"specversion":"1.0","id":"de2ab7d0-e1b8-4820-9da4-ef584af9d4fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6883f258-b050-44d8-94f1-69db36b41e7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-404000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-404000
--- PASS: TestErrorJSONOutput (0.33s)
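Each stdout line above is one JSON object in a CloudEvents-style envelope (specversion, id, source, type, data), with errors carried as type io.k8s.sigs.minikube.error. A sketch of a consumer for that stream, using only the fields visible above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope printed by --output=json above; only the
// fields used here are declared.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. out/minikube-darwin-arm64 start ... --output=json | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s (%s): %s\n", e.Data["exitcode"], e.Data["name"], e.Data["message"])
		}
	}
}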

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.41s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.41s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-196000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (110.8295ms)

-- stdout --
	* [NoKubernetes-196000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18588
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18588-7343/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18588-7343/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
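The MK_USAGE failure above (exit status 14) documents a mutual-exclusion rule: --kubernetes-version cannot be combined with --no-kubernetes. A hypothetical sketch of such a check, not minikube's actual source:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the host without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The rule stated in the stderr above: the two flags are incompatible.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the usage exit code observed above
	}
}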

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-196000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-196000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.016ms)

-- stdout --
	* The control-plane node NoKubernetes-196000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-196000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
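The probe above leans on systemctl semantics: systemctl is-active --quiet exits 0 only when the unit is active, so any non-zero exit over minikube ssh means kubelet is not running (here minikube itself exits 83 because the host is stopped). A sketch of the same probe from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Runs the same check as the test above; the exit status is the signal,
	// not the command's output.
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-196000",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active (or the host is not running):", err)
		return
	}
	fmt.Println("kubelet is active")
}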

TestNoKubernetes/serial/ProfileList (31.48s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.678410209s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.797242167s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.48s)

TestNoKubernetes/serial/Stop (2.1s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-196000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-196000: (2.096432375s)
--- PASS: TestNoKubernetes/serial/Stop (2.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-196000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-196000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.597458ms)

-- stdout --
	* The control-plane node NoKubernetes-196000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-196000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-462000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (1.92s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-820000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-820000 --alsologtostderr -v=3: (1.917403542s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.92s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-820000 -n old-k8s-version-820000: exit status 7 (38.737666ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-820000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
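The two steps above form a pattern that repeats in the entries below: minikube status --format={{.Host}} renders just the host field through a Go template and exits non-zero (status 7 here) when the host is stopped, so the exit code has to be inspected rather than treated as a failure outright. A sketch of reading that status from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-820000", "-n", "old-k8s-version-820000")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 7 with "Stopped" on stdout is expected after a stop.
		fmt.Printf("host %q, exit code %d (may be ok)\n", string(out), ee.ExitCode())
		return
	} else if err != nil {
		panic(err)
	}
	fmt.Printf("host %q\n", string(out))
}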

TestStartStop/group/no-preload/serial/Stop (3.22s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-272000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-272000 --alsologtostderr -v=3: (3.222026375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.22s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (61.163458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-272000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.62s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-967000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-967000 --alsologtostderr -v=3: (3.618659584s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.62s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-967000 -n embed-certs-967000: exit status 7 (65.32275ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-967000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.62s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-730000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-730000 --alsologtostderr -v=3: (3.62096275s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.62s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (61.731459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-730000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-070000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.45s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-070000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-070000 --alsologtostderr -v=3: (3.4461375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.45s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-070000 -n newest-cni-070000: exit status 7 (64.068166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-070000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.11s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2885574628/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1712575687631073000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2885574628/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1712575687631073000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2885574628/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1712575687631073000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2885574628/001/test-1712575687631073000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.432792ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.166ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.569209ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.462958ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.796041ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.298542ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.651ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.044541ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo umount -f /mount-9p": exit status 83 (47.619208ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-756000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2885574628/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.11s)

TestFunctional/parallel/MountCmd/specific-port (11.5s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port519869252/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (65.221208ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.703291ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.312333ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.009ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.559125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.376625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.911584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "sudo umount -f /mount-9p": exit status 83 (47.71ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-756000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port519869252/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.50s)
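Note: the SKIP above is the test's poll-and-skip behavior, visible in the repeated :243 lines: findmnt is retried several times inside the guest and, when the 9p mount never appears (here because the host is stopped and, on macOS, because an unsigned binary must be manually allowed to listen on a non-localhost port), the test skips instead of failing. A minimal sketch of that pattern, assuming a hypothetical profile name and binary path — illustrative only, not the actual functional_test_mount_test.go source:

package mount_test

import (
	"os/exec"
	"testing"
	"time"
)

// waitForMount polls `ssh findmnt` inside the guest until the 9p mount
// is visible, giving up after the deadline.
func waitForMount(t *testing.T, profile string, deadline time.Duration) bool {
	t.Helper()
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		// Mirrors the command in the log above; binary path and profile
		// are assumptions for this sketch.
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if err := cmd.Run(); err == nil {
			return true // mount appeared inside the guest
		}
		time.Sleep(1 * time.Second)
	}
	return false
}

func TestMountAppears(t *testing.T) {
	if !waitForMount(t, "functional-756000", 10*time.Second) {
		// Environmental limitation, not a product bug: skip rather than fail.
		t.Skip("mount did not appear; macOS may require allowing the unsigned binary to listen")
	}
}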

TestFunctional/parallel/MountCmd/VerifyCleanup (13.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3682670148/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3682670148/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3682670148/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1: exit status 83 (88.3845ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1: exit status 83 (87.489625ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1: exit status 83 (88.480208ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1: exit status 83 (97.425459ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1: exit status 83 (89.146ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1: exit status 83 (89.863291ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1: exit status 83 (88.920667ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-756000 ssh "findmnt -T" /mount1: exit status 83 (88.718208ms)

-- stdout --
	* The control-plane node functional-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-756000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3682670148/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3682670148/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-756000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3682670148/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.95s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
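Note: this skip is flag gating, not a failure — the suite only exercises gVisor when it is explicitly requested on the command line. A minimal sketch of the pattern behind the message above, with a flag name taken from the log; this is illustrative, not the actual gvisor_addon_test.go source:

package addon_test

import (
	"flag"
	"testing"
)

// Opt-in flag; defaults to false, so the test is skipped unless the
// runner passes --gvisor=true.
var gvisor = flag.Bool("gvisor", false, "run tests that require the gVisor addon")

func TestGvisorAddon(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... gVisor-specific assertions would go here ...
}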

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-146000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-146000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-146000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /etc/hosts:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /etc/resolv.conf:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-146000

>>> host: crictl pods:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: crictl containers:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> k8s: describe netcat deployment:
error: context "cilium-146000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-146000" does not exist

>>> k8s: netcat logs:
error: context "cilium-146000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-146000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-146000" does not exist

>>> k8s: coredns logs:
error: context "cilium-146000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-146000" does not exist

>>> k8s: api server logs:
error: context "cilium-146000" does not exist

>>> host: /etc/cni:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: ip a s:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: ip r s:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: iptables-save:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: iptables table nat:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-146000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-146000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-146000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-146000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-146000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-146000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-146000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-146000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-146000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-146000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-146000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: kubelet daemon config:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> k8s: kubelet logs:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-146000

>>> host: docker daemon status:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: docker daemon config:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: docker system info:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: cri-docker daemon status:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: cri-docker daemon config:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: cri-dockerd version:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: containerd daemon status:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: containerd daemon config:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: containerd config dump:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: crio daemon status:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: crio daemon config:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: /etc/crio:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

>>> host: crio config:
* Profile "cilium-146000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-146000"

----------------------- debugLogs end: cilium-146000 [took: 2.296491834s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-146000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-146000
--- SKIP: TestNetworkPlugins/group/cilium (2.53s)
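Note: every kubectl probe in the debugLogs block above fails the same way because the dumped kubeconfig (see ">>> k8s: kubectl config") has contexts: null — there is no cilium-146000 context, so kubectl bails out before contacting any cluster. A sketch of checking for a context up front with client-go, assuming the default kubeconfig path; this is illustrative and not part of the test suite:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same file kubectl reads by default.
	path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read kubeconfig:", err)
		os.Exit(1)
	}
	// With `contexts: null`, this map is empty and the lookup fails,
	// matching the "context was not found" errors in the log.
	if _, ok := cfg.Contexts["cilium-146000"]; !ok {
		fmt.Println(`context "cilium-146000" does not exist`)
	}
}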

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-629000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
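Note: the helpers_test.go:175/178 lines above are the suite's cleanup hook — each test registers a profile deletion so a skipped run never leaks a profile into the next test. A minimal sketch of that pattern, with a hypothetical helper name mirroring the delete command in the log:

package helpers_test

import (
	"os/exec"
	"testing"
)

// cleanupProfile registers a deferred `minikube delete` for the profile,
// so it runs even when the test skips partway through.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Cleanup(func() {
		t.Logf("Cleaning up %q profile ...", profile)
		// Mirrors: out/minikube-darwin-arm64 delete -p <profile>
		cmd := exec.Command("out/minikube-darwin-arm64", "delete", "-p", profile)
		if out, err := cmd.CombinedOutput(); err != nil {
			t.Logf("delete failed: %v\n%s", err, out)
		}
	})
}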
