Test Report: QEMU_macOS 18771

d8f44c85dc50f37f8a74f4a275902bf69829aaa8:2024-04-29:34254

Failed tests (156 of 258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.81
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.09
27 TestAddons/Setup 10.25
28 TestCertOptions 10.04
29 TestCertExpiration 195.25
30 TestDockerFlags 10.03
31 TestForceSystemdFlag 10.4
32 TestForceSystemdEnv 10.1
38 TestErrorSpam/setup 9.93
47 TestFunctional/serial/StartWithProxy 9.95
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
61 TestFunctional/serial/MinikubeKubectlCmd 0.64
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.95
63 TestFunctional/serial/ExtraConfig 5.3
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.13
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.28
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.3
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 96.71
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.52
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 25.94
141 TestMultiControlPlane/serial/StartCluster 10.1
142 TestMultiControlPlane/serial/DeployApp 68.74
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 46.19
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.39
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.97
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
162 TestImageBuild/serial/Setup 9.85
165 TestJSONOutput/start/Command 9.8
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.35
197 TestMountStart/serial/StartWithMountFirst 9.93
200 TestMultiNode/serial/FreshStart2Nodes 9.98
201 TestMultiNode/serial/DeployApp2Nodes 106.32
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.11
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.15
208 TestMultiNode/serial/StartAfterStop 56.72
209 TestMultiNode/serial/RestartKeepsNodes 8.47
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 3.22
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.15
217 TestPreload 10.26
219 TestScheduledStopUnix 10.08
220 TestSkaffold 12.32
223 TestRunningBinaryUpgrade 604.89
225 TestKubernetesUpgrade 18.14
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.11
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.9
241 TestStoppedBinaryUpgrade/Upgrade 580.42
243 TestPause/serial/Start 10.17
253 TestNoKubernetes/serial/StartWithK8s 9.77
254 TestNoKubernetes/serial/StartWithStopK8s 5.27
255 TestNoKubernetes/serial/Start 5.31
259 TestNoKubernetes/serial/StartNoArgs 5.33
261 TestNetworkPlugins/group/kindnet/Start 9.82
262 TestNetworkPlugins/group/auto/Start 9.86
263 TestNetworkPlugins/group/flannel/Start 9.83
264 TestNetworkPlugins/group/enable-default-cni/Start 9.79
265 TestNetworkPlugins/group/bridge/Start 9.93
266 TestNetworkPlugins/group/kubenet/Start 9.85
267 TestNetworkPlugins/group/custom-flannel/Start 9.85
268 TestNetworkPlugins/group/calico/Start 9.83
269 TestNetworkPlugins/group/false/Start 9.92
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.87
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 9.79
285 TestStartStop/group/embed-certs/serial/FirstStart 10.86
286 TestStartStop/group/no-preload/serial/DeployApp 0.1
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.13
290 TestStartStop/group/no-preload/serial/SecondStart 6.14
291 TestStartStop/group/embed-certs/serial/DeployApp 0.1
292 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
293 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
295 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
296 TestStartStop/group/no-preload/serial/Pause 0.12
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.12
301 TestStartStop/group/embed-certs/serial/SecondStart 7.13
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
303 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
305 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
307 TestStartStop/group/embed-certs/serial/Pause 0.12
310 TestStartStop/group/newest-cni/serial/FirstStart 9.92
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.84
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/SecondStart 5.29
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.11
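
Most of the ~10 s failures in this table share a single root cause, shown in the details below: the qemu2 driver cannot reach the socket_vmnet daemon, so VM creation fails before a cluster ever starts. To reproduce one of these failures locally, the corresponding integration test can be re-run with standard Go tooling; a minimal sketch, assuming a checked-out minikube tree with out/minikube-darwin-arm64 already built (any minikube-specific test flags, e.g. for driver selection, are omitted here because they vary by suite):

    # Re-run a single failed test from the table above; only standard
    # `go test` flags are used.
    go test ./test/integration -run 'TestOffline$' -v -timeout 30m
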
TestDownloadOnly/v1.20.0/json-events (10.81s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-363000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-363000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.8112405s)

-- stdout --
	{"specversion":"1.0","id":"e240bfda-7171-4fa0-962d-e9efbfe3e776","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-363000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c35f5279-072a-4f06-b29b-0ffc87783baa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18771"}}
	{"specversion":"1.0","id":"8b15023e-912e-41d0-aa80-806417f06f47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig"}}
	{"specversion":"1.0","id":"ebfea975-3e44-4136-8127-0ecba574402c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4e3cbdd4-fe55-4729-9661-3af8be044b23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"09b4c003-a4d0-47e6-ab90-081a4512ab08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube"}}
	{"specversion":"1.0","id":"6ff1d3f6-7aaa-4793-849a-7e21bc5021bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"9ce320d2-2a11-424d-a7c3-5e41a16bd4a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a99aae52-7010-4871-b1d3-81a83ba3d2e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4d6e107c-fce0-4ba4-91fa-038788eb9327","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bdd47fe-1141-4235-9c34-ff4c0596a43e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-363000\" primary control-plane node in \"download-only-363000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"17a90f51-cc0d-44d0-8f10-44fae452205b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"623a51fe-f765-4826-b878-4c82914e5350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00] Decompressors:map[bz2:0x1400059d400 gz:0x1400059d408 tar:0x1400059d3b0 tar.bz2:0x1400059d3c0 tar.gz:0x1400059d3d0 tar.xz:0x1400059d3e0 tar.zst:0x1400059d3f0 tbz2:0x1400059d3c0 tgz:0x14
00059d3d0 txz:0x1400059d3e0 tzst:0x1400059d3f0 xz:0x1400059d410 zip:0x1400059d420 zst:0x1400059d418] Getters:map[file:0x140020d2580 http:0x14000c922d0 https:0x14000c92320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"5a4c4745-2281-4e3e-8c1e-a26dbe177f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0429 04:43:47.460945    6502 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:43:47.461160    6502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:43:47.461164    6502 out.go:304] Setting ErrFile to fd 2...
	I0429 04:43:47.461166    6502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:43:47.461296    6502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	W0429 04:43:47.461390    6502 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18771-6092/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18771-6092/.minikube/config/config.json: no such file or directory
	I0429 04:43:47.462850    6502 out.go:298] Setting JSON to true
	I0429 04:43:47.480996    6502 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4398,"bootTime":1714386629,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:43:47.481068    6502 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:43:47.485884    6502 out.go:97] [download-only-363000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:43:47.490140    6502 out.go:169] MINIKUBE_LOCATION=18771
	I0429 04:43:47.486015    6502 notify.go:220] Checking for updates...
	W0429 04:43:47.486058    6502 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 04:43:47.498854    6502 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:43:47.502642    6502 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:43:47.506161    6502 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:43:47.508993    6502 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	W0429 04:43:47.515057    6502 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 04:43:47.515263    6502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:43:47.516957    6502 out.go:97] Using the qemu2 driver based on user configuration
	I0429 04:43:47.516975    6502 start.go:297] selected driver: qemu2
	I0429 04:43:47.516989    6502 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:43:47.517078    6502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:43:47.520034    6502 out.go:169] Automatically selected the socket_vmnet network
	I0429 04:43:47.525274    6502 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0429 04:43:47.525419    6502 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 04:43:47.525473    6502 cni.go:84] Creating CNI manager for ""
	I0429 04:43:47.525491    6502 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 04:43:47.525543    6502 start.go:340] cluster config:
	{Name:download-only-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:43:47.530136    6502 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:43:47.533129    6502 out.go:97] Downloading VM boot image ...
	I0429 04:43:47.533159    6502 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso
	I0429 04:43:51.893550    6502 out.go:97] Starting "download-only-363000" primary control-plane node in "download-only-363000" cluster
	I0429 04:43:51.893576    6502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:43:51.948542    6502 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0429 04:43:51.948547    6502 cache.go:56] Caching tarball of preloaded images
	I0429 04:43:51.949680    6502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:43:51.957686    6502 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 04:43:51.957692    6502 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:43:52.030855    6502 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0429 04:43:57.170059    6502 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:43:57.170225    6502 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:43:57.865604    6502 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 04:43:57.865813    6502 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/download-only-363000/config.json ...
	I0429 04:43:57.865832    6502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/download-only-363000/config.json: {Name:mkc09461e31cba7ecb8f15df0ace1215d278d8e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:43:57.867448    6502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:43:57.867629    6502 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0429 04:43:58.194257    6502 out.go:169] 
	W0429 04:43:58.198279    6502 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00] Decompressors:map[bz2:0x1400059d400 gz:0x1400059d408 tar:0x1400059d3b0 tar.bz2:0x1400059d3c0 tar.gz:0x1400059d3d0 tar.xz:0x1400059d3e0 tar.zst:0x1400059d3f0 tbz2:0x1400059d3c0 tgz:0x1400059d3d0 txz:0x1400059d3e0 tzst:0x1400059d3f0 xz:0x1400059d410 zip:0x1400059d420 zst:0x1400059d418] Getters:map[file:0x140020d2580 http:0x14000c922d0 https:0x14000c92320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0429 04:43:58.198304    6502 out_reason.go:110] 
	W0429 04:43:58.205209    6502 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:43:58.209215    6502 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-363000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (10.81s)
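
The exit status 40 (reason code INET_CACHE_KUBECTL) traces back to the 404 reported above: dl.k8s.io serves neither the kubectl binary nor its .sha256 checksum file for v1.20.0 on darwin/arm64, a combination that appears to predate upstream darwin/arm64 release binaries. A quick way to confirm this from any machine, as a diagnostic sketch built from the URL in the log:

    # HEAD-request the checksum file and the binary; a 404 status line on
    # either confirms this download can never succeed for v1.20.0/darwin/arm64.
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl | head -n 1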

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
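
This failure is downstream of the previous one: the kubectl download never succeeded, so nothing was written to the cache path and the test's existence check fails. The equivalent manual check, using the path verbatim from the log:

    # The test asserts that the cached binary exists; the same check by hand:
    stat /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.20.0/kubectl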

TestOffline (10.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-525000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-525000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.941517541s)

-- stdout --
	* [offline-docker-525000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-525000" primary control-plane node in "offline-docker-525000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:54:47.128676    7980 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:54:47.128818    7980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:54:47.128825    7980 out.go:304] Setting ErrFile to fd 2...
	I0429 04:54:47.128828    7980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:54:47.128955    7980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:54:47.130100    7980 out.go:298] Setting JSON to false
	I0429 04:54:47.147706    7980 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5058,"bootTime":1714386629,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:54:47.147813    7980 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:54:47.153349    7980 out.go:177] * [offline-docker-525000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:54:47.161414    7980 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:54:47.161415    7980 notify.go:220] Checking for updates...
	I0429 04:54:47.168372    7980 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:54:47.171393    7980 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:54:47.174305    7980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:54:47.177348    7980 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:54:47.180357    7980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:54:47.183674    7980 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:54:47.183726    7980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:54:47.187305    7980 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:54:47.194365    7980 start.go:297] selected driver: qemu2
	I0429 04:54:47.194373    7980 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:54:47.194379    7980 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:54:47.196852    7980 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:54:47.199349    7980 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:54:47.202405    7980 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:54:47.202443    7980 cni.go:84] Creating CNI manager for ""
	I0429 04:54:47.202450    7980 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:54:47.202453    7980 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 04:54:47.202483    7980 start.go:340] cluster config:
	{Name:offline-docker-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-525000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:54:47.207053    7980 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:47.214359    7980 out.go:177] * Starting "offline-docker-525000" primary control-plane node in "offline-docker-525000" cluster
	I0429 04:54:47.218321    7980 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:54:47.218355    7980 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:54:47.218360    7980 cache.go:56] Caching tarball of preloaded images
	I0429 04:54:47.218445    7980 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:54:47.218451    7980 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:54:47.218507    7980 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/offline-docker-525000/config.json ...
	I0429 04:54:47.218517    7980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/offline-docker-525000/config.json: {Name:mkb4f0395da6d09e8e80b4970ab861d180125feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:54:47.218827    7980 start.go:360] acquireMachinesLock for offline-docker-525000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:54:47.218862    7980 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "offline-docker-525000"
	I0429 04:54:47.218873    7980 start.go:93] Provisioning new machine with config: &{Name:offline-docker-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-525000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:54:47.218917    7980 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:54:47.230188    7980 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 04:54:47.245853    7980 start.go:159] libmachine.API.Create for "offline-docker-525000" (driver="qemu2")
	I0429 04:54:47.245889    7980 client.go:168] LocalClient.Create starting
	I0429 04:54:47.245967    7980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:54:47.245999    7980 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:47.246013    7980 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:47.246062    7980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:54:47.246084    7980 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:47.246094    7980 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:47.246529    7980 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:54:47.388613    7980 main.go:141] libmachine: Creating SSH key...
	I0429 04:54:47.429593    7980 main.go:141] libmachine: Creating Disk image...
	I0429 04:54:47.429602    7980 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:54:47.430934    7980 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2
	I0429 04:54:47.443851    7980 main.go:141] libmachine: STDOUT: 
	I0429 04:54:47.443883    7980 main.go:141] libmachine: STDERR: 
	I0429 04:54:47.443961    7980 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2 +20000M
	I0429 04:54:47.456676    7980 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:54:47.456703    7980 main.go:141] libmachine: STDERR: 
	I0429 04:54:47.456724    7980 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2
	I0429 04:54:47.456730    7980 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:54:47.456763    7980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:d9:d9:43:9a:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2
	I0429 04:54:47.458453    7980 main.go:141] libmachine: STDOUT: 
	I0429 04:54:47.458469    7980 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:54:47.458489    7980 client.go:171] duration metric: took 212.596584ms to LocalClient.Create
	I0429 04:54:49.460546    7980 start.go:128] duration metric: took 2.241641417s to createHost
	I0429 04:54:49.460561    7980 start.go:83] releasing machines lock for "offline-docker-525000", held for 2.241714291s
	W0429 04:54:49.460576    7980 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:54:49.468225    7980 out.go:177] * Deleting "offline-docker-525000" in qemu2 ...
	W0429 04:54:49.477674    7980 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:54:49.477682    7980 start.go:728] Will try again in 5 seconds ...
	I0429 04:54:54.479929    7980 start.go:360] acquireMachinesLock for offline-docker-525000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:54:54.480353    7980 start.go:364] duration metric: took 311.666µs to acquireMachinesLock for "offline-docker-525000"
	I0429 04:54:54.480509    7980 start.go:93] Provisioning new machine with config: &{Name:offline-docker-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-525000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:54:54.480807    7980 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:54:54.489483    7980 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 04:54:54.539091    7980 start.go:159] libmachine.API.Create for "offline-docker-525000" (driver="qemu2")
	I0429 04:54:54.539164    7980 client.go:168] LocalClient.Create starting
	I0429 04:54:54.539279    7980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:54:54.539342    7980 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:54.539360    7980 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:54.539420    7980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:54:54.539463    7980 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:54.539479    7980 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:54.540109    7980 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:54:54.690182    7980 main.go:141] libmachine: Creating SSH key...
	I0429 04:54:54.971860    7980 main.go:141] libmachine: Creating Disk image...
	I0429 04:54:54.971871    7980 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:54:54.972074    7980 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2
	I0429 04:54:54.984704    7980 main.go:141] libmachine: STDOUT: 
	I0429 04:54:54.984726    7980 main.go:141] libmachine: STDERR: 
	I0429 04:54:54.984791    7980 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2 +20000M
	I0429 04:54:54.995822    7980 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:54:54.995836    7980 main.go:141] libmachine: STDERR: 
	I0429 04:54:54.995846    7980 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2
	I0429 04:54:54.995849    7980 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:54:54.995886    7980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:de:53:81:b5:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/offline-docker-525000/disk.qcow2
	I0429 04:54:54.997436    7980 main.go:141] libmachine: STDOUT: 
	I0429 04:54:54.997451    7980 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:54:54.997464    7980 client.go:171] duration metric: took 458.299291ms to LocalClient.Create
	I0429 04:54:56.999584    7980 start.go:128] duration metric: took 2.518776084s to createHost
	I0429 04:54:56.999635    7980 start.go:83] releasing machines lock for "offline-docker-525000", held for 2.519282375s
	W0429 04:54:56.999927    7980 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:54:57.010134    7980 out.go:177] 
	W0429 04:54:57.014246    7980 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:54:57.014280    7980 out.go:239] * 
	* 
	W0429 04:54:57.016707    7980 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:54:57.026199    7980 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-525000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-04-29 04:54:57.039845 -0700 PDT m=+669.672395710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-525000 -n offline-docker-525000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-525000 -n offline-docker-525000: exit status 7 (45.418958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-525000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-525000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-525000
--- FAIL: TestOffline (10.09s)
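
The signature here, Failed to connect to "/var/run/socket_vmnet": Connection refused, recurs across most failures in this run: libmachine creates the disk image successfully, but the socket_vmnet_client wrapper around qemu-system-aarch64 cannot reach the socket_vmnet daemon, so every qemu2 VM start aborts. A diagnostic sketch for the host, using only the socket path from the logs (the service-restart line is an assumption about a Homebrew install, not something shown in this report):

    # Is the socket present, and is the daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
    # If socket_vmnet was installed via Homebrew, restarting its root service
    # may recover it (assumption; depends on the install method):
    # sudo brew services restart socket_vmnet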

TestAddons/Setup (10.25s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-744000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-744000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.250028792s)

-- stdout --
	* [addons-744000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-744000" primary control-plane node in "addons-744000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-744000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:44:08.731709    6609 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:44:08.731847    6609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:44:08.731850    6609 out.go:304] Setting ErrFile to fd 2...
	I0429 04:44:08.731853    6609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:44:08.731976    6609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:44:08.733030    6609 out.go:298] Setting JSON to false
	I0429 04:44:08.749022    6609 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4419,"bootTime":1714386629,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:44:08.749097    6609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:44:08.753702    6609 out.go:177] * [addons-744000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:44:08.760721    6609 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:44:08.764598    6609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:44:08.760799    6609 notify.go:220] Checking for updates...
	I0429 04:44:08.770567    6609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:44:08.773646    6609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:44:08.776662    6609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:44:08.779582    6609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:44:08.782719    6609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:44:08.786625    6609 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:44:08.793646    6609 start.go:297] selected driver: qemu2
	I0429 04:44:08.793654    6609 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:44:08.793663    6609 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:44:08.795918    6609 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:44:08.799657    6609 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:44:08.802723    6609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:44:08.802750    6609 cni.go:84] Creating CNI manager for ""
	I0429 04:44:08.802757    6609 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:44:08.802761    6609 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 04:44:08.802791    6609 start.go:340] cluster config:
	{Name:addons-744000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:44:08.807301    6609 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:44:08.814669    6609 out.go:177] * Starting "addons-744000" primary control-plane node in "addons-744000" cluster
	I0429 04:44:08.818618    6609 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:44:08.818633    6609 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:44:08.818642    6609 cache.go:56] Caching tarball of preloaded images
	I0429 04:44:08.818705    6609 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:44:08.818710    6609 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:44:08.818932    6609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/addons-744000/config.json ...
	I0429 04:44:08.818944    6609 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/addons-744000/config.json: {Name:mk3e65df91660d5db6d32c5f18d6686542d71e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:44:08.819334    6609 start.go:360] acquireMachinesLock for addons-744000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:44:08.819405    6609 start.go:364] duration metric: took 64.083µs to acquireMachinesLock for "addons-744000"
	I0429 04:44:08.819419    6609 start.go:93] Provisioning new machine with config: &{Name:addons-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:44:08.819450    6609 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:44:08.827672    6609 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 04:44:08.848285    6609 start.go:159] libmachine.API.Create for "addons-744000" (driver="qemu2")
	I0429 04:44:08.848315    6609 client.go:168] LocalClient.Create starting
	I0429 04:44:08.848445    6609 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:44:09.078333    6609 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:44:09.161512    6609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:44:09.366552    6609 main.go:141] libmachine: Creating SSH key...
	I0429 04:44:09.477537    6609 main.go:141] libmachine: Creating Disk image...
	I0429 04:44:09.477546    6609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:44:09.477739    6609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2
	I0429 04:44:09.490620    6609 main.go:141] libmachine: STDOUT: 
	I0429 04:44:09.490641    6609 main.go:141] libmachine: STDERR: 
	I0429 04:44:09.490692    6609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2 +20000M
	I0429 04:44:09.501738    6609 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:44:09.501756    6609 main.go:141] libmachine: STDERR: 
	I0429 04:44:09.501777    6609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2
	I0429 04:44:09.501781    6609 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:44:09.501811    6609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:c6:92:51:c1:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2
	I0429 04:44:09.503475    6609 main.go:141] libmachine: STDOUT: 
	I0429 04:44:09.503491    6609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:44:09.503508    6609 client.go:171] duration metric: took 655.193584ms to LocalClient.Create
	I0429 04:44:11.505706    6609 start.go:128] duration metric: took 2.686255917s to createHost
	I0429 04:44:11.505765    6609 start.go:83] releasing machines lock for "addons-744000", held for 2.686374791s
	W0429 04:44:11.505867    6609 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:44:11.519080    6609 out.go:177] * Deleting "addons-744000" in qemu2 ...
	W0429 04:44:11.548472    6609 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:44:11.548500    6609 start.go:728] Will try again in 5 seconds ...
	I0429 04:44:16.550682    6609 start.go:360] acquireMachinesLock for addons-744000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:44:16.551136    6609 start.go:364] duration metric: took 360.208µs to acquireMachinesLock for "addons-744000"
	I0429 04:44:16.551244    6609 start.go:93] Provisioning new machine with config: &{Name:addons-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:44:16.551615    6609 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:44:16.562363    6609 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 04:44:16.612062    6609 start.go:159] libmachine.API.Create for "addons-744000" (driver="qemu2")
	I0429 04:44:16.612108    6609 client.go:168] LocalClient.Create starting
	I0429 04:44:16.612230    6609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:44:16.612292    6609 main.go:141] libmachine: Decoding PEM data...
	I0429 04:44:16.612305    6609 main.go:141] libmachine: Parsing certificate...
	I0429 04:44:16.612385    6609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:44:16.612438    6609 main.go:141] libmachine: Decoding PEM data...
	I0429 04:44:16.612452    6609 main.go:141] libmachine: Parsing certificate...
	I0429 04:44:16.613089    6609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:44:16.762347    6609 main.go:141] libmachine: Creating SSH key...
	I0429 04:44:16.880964    6609 main.go:141] libmachine: Creating Disk image...
	I0429 04:44:16.880969    6609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:44:16.881147    6609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2
	I0429 04:44:16.893953    6609 main.go:141] libmachine: STDOUT: 
	I0429 04:44:16.893975    6609 main.go:141] libmachine: STDERR: 
	I0429 04:44:16.894031    6609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2 +20000M
	I0429 04:44:16.904811    6609 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:44:16.904829    6609 main.go:141] libmachine: STDERR: 
	I0429 04:44:16.904859    6609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2
	I0429 04:44:16.904864    6609 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:44:16.904892    6609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:45:19:e1:75:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/addons-744000/disk.qcow2
	I0429 04:44:16.906577    6609 main.go:141] libmachine: STDOUT: 
	I0429 04:44:16.906595    6609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:44:16.906609    6609 client.go:171] duration metric: took 294.49625ms to LocalClient.Create
	I0429 04:44:18.908830    6609 start.go:128] duration metric: took 2.3571735s to createHost
	I0429 04:44:18.908911    6609 start.go:83] releasing machines lock for "addons-744000", held for 2.357772667s
	W0429 04:44:18.909271    6609 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:44:18.920254    6609 out.go:177] 
	W0429 04:44:18.925419    6609 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:44:18.925445    6609 out.go:239] * 
	* 
	W0429 04:44:18.928283    6609 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:44:18.936363    6609 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-744000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.25s)
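The stderr trace above shows exactly how the VM is started: libmachine wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the resulting descriptor to QEMU (the -netdev socket,id=net0,fd=3 argument). The failing connect can be reproduced without minikube; a sketch, assuming socket_vmnet_client keeps the wrap-a-command interface visible in the log:

	# Prints "ok" if the daemon is reachable; otherwise it fails with
	# the same "Connection refused" seen throughout this report.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

If that fails, starting the daemon in the foreground is the quickest way to unblock the suite; the gateway address below is illustrative, not taken from these logs:

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet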

TestCertOptions (10.04s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-495000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-495000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.744907959s)

-- stdout --
	* [cert-options-495000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-495000" primary control-plane node in "cert-options-495000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-495000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-495000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-495000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-495000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.843542ms)

-- stdout --
	* The control-plane node cert-options-495000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-495000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-495000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-495000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-495000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-495000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.405125ms)

-- stdout --
	* The control-plane node cert-options-495000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-495000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-495000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-495000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-495000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-04-29 04:55:27.220349 -0700 PDT m=+699.853166668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-495000 -n cert-options-495000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-495000 -n cert-options-495000: exit status 7 (32.491709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-495000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-495000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-495000
--- FAIL: TestCertOptions (10.04s)
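When provisioning succeeds, the SAN assertions at cert_options_test.go:69 amount to a single openssl check; a sketch reusing the exact command the test runs (profile name taken from this run):

	out/minikube-darwin-arm64 -p cert-options-495000 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"

Here the test never got that far: with the host stopped, ssh exits 83, so the four "does not include ... in SAN" messages are downstream symptoms of the provisioning failure rather than evidence of a certificate bug.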

TestCertExpiration (195.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-508000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-508000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.864559542s)

-- stdout --
	* [cert-expiration-508000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-508000" primary control-plane node in "cert-expiration-508000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-508000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-508000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-508000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-508000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-508000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.238442417s)

-- stdout --
	* [cert-expiration-508000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-508000" primary control-plane node in "cert-expiration-508000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-508000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-508000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-508000" primary control-plane node in "cert-expiration-508000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-04-29 04:58:27.388499 -0700 PDT m=+879.994602751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-508000 -n cert-expiration-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-508000 -n cert-expiration-508000: exit status 7 (46.568375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-508000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-508000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-508000
--- FAIL: TestCertExpiration (195.25s)
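Note the 195.25s wall time despite both starts failing within seconds: the test starts with --cert-expiration=3m, waits out the three-minute expiry window, then restarts with --cert-expiration=8760h, so most of the duration is the fixed wait (9.86s + ~180s + 5.24s). On a machine where the VM does start, the generated certificate lifetime can be checked directly; a sketch, with the client-certificate path assumed from the MINIKUBE_HOME shown above:

	CERT=/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/cert-expiration-508000/client.crt
	# Print the notAfter date of the minikube-issued client certificate...
	openssl x509 -noout -enddate -in "$CERT"
	# ...or exit nonzero if it expires within the next 180 seconds.
	openssl x509 -noout -checkend 180 -in "$CERT"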

TestDockerFlags (10.03s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-285000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-285000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.764710792s)

-- stdout --
	* [docker-flags-285000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-285000" primary control-plane node in "docker-flags-285000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-285000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:55:07.319801    8174 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:55:07.319927    8174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:55:07.319931    8174 out.go:304] Setting ErrFile to fd 2...
	I0429 04:55:07.319933    8174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:55:07.320046    8174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:55:07.321126    8174 out.go:298] Setting JSON to false
	I0429 04:55:07.337198    8174 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5078,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:55:07.337265    8174 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:55:07.346331    8174 out.go:177] * [docker-flags-285000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:55:07.353268    8174 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:55:07.357082    8174 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:55:07.353295    8174 notify.go:220] Checking for updates...
	I0429 04:55:07.364217    8174 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:55:07.367287    8174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:55:07.370172    8174 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:55:07.373247    8174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:55:07.376562    8174 config.go:182] Loaded profile config "force-systemd-flag-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:55:07.376638    8174 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:55:07.376687    8174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:55:07.381206    8174 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:55:07.388244    8174 start.go:297] selected driver: qemu2
	I0429 04:55:07.388251    8174 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:55:07.388257    8174 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:55:07.390515    8174 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:55:07.394141    8174 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:55:07.397358    8174 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0429 04:55:07.397404    8174 cni.go:84] Creating CNI manager for ""
	I0429 04:55:07.397412    8174 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:55:07.397416    8174 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 04:55:07.397457    8174 start.go:340] cluster config:
	{Name:docker-flags-285000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-285000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:55:07.402061    8174 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:55:07.409205    8174 out.go:177] * Starting "docker-flags-285000" primary control-plane node in "docker-flags-285000" cluster
	I0429 04:55:07.413246    8174 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:55:07.413263    8174 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:55:07.413270    8174 cache.go:56] Caching tarball of preloaded images
	I0429 04:55:07.413331    8174 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:55:07.413339    8174 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:55:07.413405    8174 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/docker-flags-285000/config.json ...
	I0429 04:55:07.413418    8174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/docker-flags-285000/config.json: {Name:mkc554927baac5571161c5695f7bd72b58269a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:55:07.413656    8174 start.go:360] acquireMachinesLock for docker-flags-285000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:55:07.413693    8174 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "docker-flags-285000"
	I0429 04:55:07.413706    8174 start.go:93] Provisioning new machine with config: &{Name:docker-flags-285000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-285000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:55:07.413740    8174 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:55:07.422221    8174 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 04:55:07.441223    8174 start.go:159] libmachine.API.Create for "docker-flags-285000" (driver="qemu2")
	I0429 04:55:07.441246    8174 client.go:168] LocalClient.Create starting
	I0429 04:55:07.441310    8174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:55:07.441341    8174 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:07.441351    8174 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:07.441395    8174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:55:07.441418    8174 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:07.441425    8174 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:07.441782    8174 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:55:07.583898    8174 main.go:141] libmachine: Creating SSH key...
	I0429 04:55:07.625814    8174 main.go:141] libmachine: Creating Disk image...
	I0429 04:55:07.625819    8174 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:55:07.626000    8174 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2
	I0429 04:55:07.638302    8174 main.go:141] libmachine: STDOUT: 
	I0429 04:55:07.638322    8174 main.go:141] libmachine: STDERR: 
	I0429 04:55:07.638373    8174 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2 +20000M
	I0429 04:55:07.649073    8174 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:55:07.649091    8174 main.go:141] libmachine: STDERR: 
	I0429 04:55:07.649109    8174 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2
	I0429 04:55:07.649117    8174 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:55:07.649152    8174 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:12:1b:f4:ff:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2
	I0429 04:55:07.650785    8174 main.go:141] libmachine: STDOUT: 
	I0429 04:55:07.650801    8174 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:55:07.650819    8174 client.go:171] duration metric: took 209.570459ms to LocalClient.Create
	I0429 04:55:09.652987    8174 start.go:128] duration metric: took 2.239251125s to createHost
	I0429 04:55:09.653048    8174 start.go:83] releasing machines lock for "docker-flags-285000", held for 2.239364417s
	W0429 04:55:09.653153    8174 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:55:09.669131    8174 out.go:177] * Deleting "docker-flags-285000" in qemu2 ...
	W0429 04:55:09.688668    8174 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:55:09.688691    8174 start.go:728] Will try again in 5 seconds ...
	I0429 04:55:14.690846    8174 start.go:360] acquireMachinesLock for docker-flags-285000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:55:14.712953    8174 start.go:364] duration metric: took 21.97275ms to acquireMachinesLock for "docker-flags-285000"
	I0429 04:55:14.713055    8174 start.go:93] Provisioning new machine with config: &{Name:docker-flags-285000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-285000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:55:14.713302    8174 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:55:14.723969    8174 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 04:55:14.773674    8174 start.go:159] libmachine.API.Create for "docker-flags-285000" (driver="qemu2")
	I0429 04:55:14.773721    8174 client.go:168] LocalClient.Create starting
	I0429 04:55:14.773848    8174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:55:14.773947    8174 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:14.773965    8174 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:14.774040    8174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:55:14.774083    8174 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:14.774094    8174 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:14.774565    8174 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:55:14.927840    8174 main.go:141] libmachine: Creating SSH key...
	I0429 04:55:14.974006    8174 main.go:141] libmachine: Creating Disk image...
	I0429 04:55:14.974011    8174 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:55:14.974181    8174 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2
	I0429 04:55:14.986360    8174 main.go:141] libmachine: STDOUT: 
	I0429 04:55:14.986380    8174 main.go:141] libmachine: STDERR: 
	I0429 04:55:14.986423    8174 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2 +20000M
	I0429 04:55:14.997071    8174 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:55:14.997095    8174 main.go:141] libmachine: STDERR: 
	I0429 04:55:14.997111    8174 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2
	I0429 04:55:14.997118    8174 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:55:14.997151    8174 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:8c:e5:21:01:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/docker-flags-285000/disk.qcow2
	I0429 04:55:14.998764    8174 main.go:141] libmachine: STDOUT: 
	I0429 04:55:14.998786    8174 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:55:14.998803    8174 client.go:171] duration metric: took 225.078083ms to LocalClient.Create
	I0429 04:55:17.000949    8174 start.go:128] duration metric: took 2.287642917s to createHost
	I0429 04:55:17.001004    8174 start.go:83] releasing machines lock for "docker-flags-285000", held for 2.288036125s
	W0429 04:55:17.001340    8174 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-285000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-285000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:55:17.014952    8174 out.go:177] 
	W0429 04:55:17.024287    8174 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:55:17.024322    8174 out.go:239] * 
	* 
	W0429 04:55:17.027048    8174 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:55:17.039984    8174 out.go:177] 

** /stderr **
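
Note: both creation attempts in the stderr above fail at the same step. socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the profile is left stopped. The failure can be reproduced outside the test with the same client binary and socket path that appear in the log; the no-op command "true" below is an illustrative stand-in for the qemu invocation, not something the driver actually runs:

	# Is anything listening where the qemu2 driver expects it?
	ls -l /var/run/socket_vmnet
	# Re-run the exact client the driver uses, with a harmless command in place
	# of qemu. While the daemon is down, this fails with the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
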
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-285000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-285000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-285000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.631208ms)

-- stdout --
	* The control-plane node docker-flags-285000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-285000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-285000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-285000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-285000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-285000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-285000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-285000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-285000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.171333ms)

-- stdout --
	* The control-plane node docker-flags-285000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-285000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-285000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-285000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-285000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-285000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-04-29 04:55:17.18152 -0700 PDT m=+689.814249126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-285000 -n docker-flags-285000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-285000 -n docker-flags-285000: exit status 7 (31.403958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-285000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-285000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-285000
--- FAIL: TestDockerFlags (10.03s)
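
Note: the failure is environmental rather than specific to TestDockerFlags. The socket_vmnet daemon that backs minikube's socket_vmnet network is not running on this agent, so the run dies before the --docker-env and --docker-opt values are ever consulted. A hedged sketch of restarting the daemon by hand, assuming the /opt/socket_vmnet layout implied by SocketVMnetClientPath in the config dump above; the daemon binary path and the gateway address are assumptions, not values taken from this report:

	# Assumed daemon binary next to the socket_vmnet_client seen in the logs;
	# the gateway address is illustrative and must match the local setup.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Then re-run the start command quoted at docker_test.go:53 above to confirm.
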

TestForceSystemdFlag (10.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-163000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-163000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.173414625s)

-- stdout --
	* [force-systemd-flag-163000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-163000" primary control-plane node in "force-systemd-flag-163000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-163000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:55:01.888953    8152 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:55:01.889100    8152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:55:01.889104    8152 out.go:304] Setting ErrFile to fd 2...
	I0429 04:55:01.889106    8152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:55:01.889229    8152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:55:01.890300    8152 out.go:298] Setting JSON to false
	I0429 04:55:01.906169    8152 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5072,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:55:01.906240    8152 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:55:01.913412    8152 out.go:177] * [force-systemd-flag-163000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:55:01.920429    8152 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:55:01.920472    8152 notify.go:220] Checking for updates...
	I0429 04:55:01.925406    8152 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:55:01.928392    8152 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:55:01.931272    8152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:55:01.934360    8152 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:55:01.941221    8152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:55:01.944760    8152 config.go:182] Loaded profile config "force-systemd-env-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:55:01.944842    8152 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:55:01.944884    8152 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:55:01.949341    8152 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:55:01.956349    8152 start.go:297] selected driver: qemu2
	I0429 04:55:01.956358    8152 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:55:01.956365    8152 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:55:01.958737    8152 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:55:01.961373    8152 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:55:01.962862    8152 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 04:55:01.962896    8152 cni.go:84] Creating CNI manager for ""
	I0429 04:55:01.962904    8152 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:55:01.962908    8152 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 04:55:01.962937    8152 start.go:340] cluster config:
	{Name:force-systemd-flag-163000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:55:01.967335    8152 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:55:01.975393    8152 out.go:177] * Starting "force-systemd-flag-163000" primary control-plane node in "force-systemd-flag-163000" cluster
	I0429 04:55:01.979287    8152 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:55:01.979308    8152 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:55:01.979315    8152 cache.go:56] Caching tarball of preloaded images
	I0429 04:55:01.979390    8152 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:55:01.979396    8152 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:55:01.979459    8152 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/force-systemd-flag-163000/config.json ...
	I0429 04:55:01.979470    8152 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/force-systemd-flag-163000/config.json: {Name:mk916806d5ab516a428f13e5c72ee8dce64550c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:55:01.979703    8152 start.go:360] acquireMachinesLock for force-systemd-flag-163000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:55:01.979744    8152 start.go:364] duration metric: took 32.333µs to acquireMachinesLock for "force-systemd-flag-163000"
	I0429 04:55:01.979758    8152 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:55:01.979791    8152 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:55:01.988337    8152 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 04:55:02.006143    8152 start.go:159] libmachine.API.Create for "force-systemd-flag-163000" (driver="qemu2")
	I0429 04:55:02.006171    8152 client.go:168] LocalClient.Create starting
	I0429 04:55:02.006240    8152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:55:02.006271    8152 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:02.006278    8152 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:02.006314    8152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:55:02.006341    8152 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:02.006347    8152 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:02.006791    8152 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:55:02.149048    8152 main.go:141] libmachine: Creating SSH key...
	I0429 04:55:02.223799    8152 main.go:141] libmachine: Creating Disk image...
	I0429 04:55:02.223805    8152 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:55:02.224625    8152 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0429 04:55:02.237087    8152 main.go:141] libmachine: STDOUT: 
	I0429 04:55:02.237106    8152 main.go:141] libmachine: STDERR: 
	I0429 04:55:02.237170    8152 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2 +20000M
	I0429 04:55:02.248016    8152 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:55:02.248049    8152 main.go:141] libmachine: STDERR: 
	I0429 04:55:02.248068    8152 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0429 04:55:02.248072    8152 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:55:02.248104    8152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:7a:ac:54:37:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0429 04:55:02.249839    8152 main.go:141] libmachine: STDOUT: 
	I0429 04:55:02.249861    8152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:55:02.249884    8152 client.go:171] duration metric: took 243.711041ms to LocalClient.Create
	I0429 04:55:04.252046    8152 start.go:128] duration metric: took 2.27225325s to createHost
	I0429 04:55:04.252178    8152 start.go:83] releasing machines lock for "force-systemd-flag-163000", held for 2.272442542s
	W0429 04:55:04.252233    8152 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:55:04.259513    8152 out.go:177] * Deleting "force-systemd-flag-163000" in qemu2 ...
	W0429 04:55:04.289396    8152 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:55:04.289427    8152 start.go:728] Will try again in 5 seconds ...
	I0429 04:55:09.291548    8152 start.go:360] acquireMachinesLock for force-systemd-flag-163000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:55:09.653183    8152 start.go:364] duration metric: took 361.473333ms to acquireMachinesLock for "force-systemd-flag-163000"
	I0429 04:55:09.653349    8152 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:55:09.653625    8152 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:55:09.658482    8152 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 04:55:09.706556    8152 start.go:159] libmachine.API.Create for "force-systemd-flag-163000" (driver="qemu2")
	I0429 04:55:09.706611    8152 client.go:168] LocalClient.Create starting
	I0429 04:55:09.706747    8152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:55:09.706805    8152 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:09.706822    8152 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:09.706896    8152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:55:09.706941    8152 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:09.706953    8152 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:09.707457    8152 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:55:09.864762    8152 main.go:141] libmachine: Creating SSH key...
	I0429 04:55:09.957382    8152 main.go:141] libmachine: Creating Disk image...
	I0429 04:55:09.957387    8152 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:55:09.957570    8152 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0429 04:55:09.970260    8152 main.go:141] libmachine: STDOUT: 
	I0429 04:55:09.970280    8152 main.go:141] libmachine: STDERR: 
	I0429 04:55:09.970333    8152 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2 +20000M
	I0429 04:55:09.981277    8152 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:55:09.981293    8152 main.go:141] libmachine: STDERR: 
	I0429 04:55:09.981305    8152 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0429 04:55:09.981310    8152 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:55:09.981341    8152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:9f:36:70:71:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0429 04:55:09.982981    8152 main.go:141] libmachine: STDOUT: 
	I0429 04:55:09.983000    8152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:55:09.983012    8152 client.go:171] duration metric: took 276.39925ms to LocalClient.Create
	I0429 04:55:11.985227    8152 start.go:128] duration metric: took 2.331565083s to createHost
	I0429 04:55:11.985310    8152 start.go:83] releasing machines lock for "force-systemd-flag-163000", held for 2.33212325s
	W0429 04:55:11.985563    8152 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-163000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-163000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:55:11.990348    8152 out.go:177] 
	W0429 04:55:12.004442    8152 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:55:12.004482    8152 out.go:239] * 
	* 
	W0429 04:55:12.005869    8152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:55:12.017215    8152 out.go:177] 

** /stderr **
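
Note: because the VM never started, the ssh probes below can only hit the "host is not running" path and exit with status 83. On a healthy cluster the test's real assertion would finally run: with --force-systemd, Docker inside the guest is expected to report the systemd cgroup driver rather than cgroupfs. The check is the same command the test issues at docker_test.go:110 below:

	# Same probe the test runs; with --force-systemd the expected output is "systemd".
	out/minikube-darwin-arm64 -p force-systemd-flag-163000 ssh "docker info --format {{.CgroupDriver}}"
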
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-163000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-163000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-163000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.747ms)

-- stdout --
	* The control-plane node force-systemd-flag-163000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-163000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-163000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-29 04:55:12.118541 -0700 PDT m=+684.751224710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-163000 -n force-systemd-flag-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-163000 -n force-systemd-flag-163000: exit status 7 (36.755208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-163000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-163000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-163000
--- FAIL: TestForceSystemdFlag (10.40s)
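
Note: three exit codes recur across these sections, and they are enough for a quick triage; the meanings given here are read off this report, not from an exhaustive table. Exit status 80 is the failed start (GUEST_PROVISION), 83 is ssh against a stopped host, and 7 is "minikube status" reporting the host stopped. A hedged sketch:

	# Triage sketch; the codes are only the ones observed in this report.
	out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-163000 -n force-systemd-flag-163000
	case $? in
	  0) echo "host running: re-run the cgroup driver check" ;;
	  7) echo "host stopped or missing: check socket_vmnet before re-running" ;;
	  *) echo "unexpected status exit code" ;;
	esac
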

TestForceSystemdEnv (10.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-236000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-236000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.881872625s)

-- stdout --
	* [force-systemd-env-236000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-236000" primary control-plane node in "force-systemd-env-236000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-236000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:54:57.221183    8132 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:54:57.221311    8132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:54:57.221315    8132 out.go:304] Setting ErrFile to fd 2...
	I0429 04:54:57.221317    8132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:54:57.221461    8132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:54:57.222577    8132 out.go:298] Setting JSON to false
	I0429 04:54:57.239221    8132 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5068,"bootTime":1714386629,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:54:57.239283    8132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:54:57.245340    8132 out.go:177] * [force-systemd-env-236000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:54:57.252220    8132 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:54:57.255513    8132 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:54:57.252313    8132 notify.go:220] Checking for updates...
	I0429 04:54:57.261203    8132 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:54:57.264149    8132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:54:57.267182    8132 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:54:57.270168    8132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0429 04:54:57.272071    8132 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:54:57.272115    8132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:54:57.276171    8132 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:54:57.283023    8132 start.go:297] selected driver: qemu2
	I0429 04:54:57.283029    8132 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:54:57.283035    8132 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:54:57.285146    8132 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:54:57.288182    8132 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:54:57.291269    8132 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 04:54:57.291295    8132 cni.go:84] Creating CNI manager for ""
	I0429 04:54:57.291302    8132 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:54:57.291306    8132 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 04:54:57.291331    8132 start.go:340] cluster config:
	{Name:force-systemd-env-236000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:54:57.295411    8132 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:57.302153    8132 out.go:177] * Starting "force-systemd-env-236000" primary control-plane node in "force-systemd-env-236000" cluster
	I0429 04:54:57.306479    8132 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:54:57.306493    8132 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:54:57.306499    8132 cache.go:56] Caching tarball of preloaded images
	I0429 04:54:57.306562    8132 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:54:57.306567    8132 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:54:57.306613    8132 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/force-systemd-env-236000/config.json ...
	I0429 04:54:57.306622    8132 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/force-systemd-env-236000/config.json: {Name:mka061c5caa971eda1935abf3d2ec68d3dfa6276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:54:57.306910    8132 start.go:360] acquireMachinesLock for force-systemd-env-236000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:54:57.306941    8132 start.go:364] duration metric: took 24.334µs to acquireMachinesLock for "force-systemd-env-236000"
	I0429 04:54:57.306952    8132 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:54:57.306982    8132 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:54:57.314193    8132 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 04:54:57.329211    8132 start.go:159] libmachine.API.Create for "force-systemd-env-236000" (driver="qemu2")
	I0429 04:54:57.329234    8132 client.go:168] LocalClient.Create starting
	I0429 04:54:57.329301    8132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:54:57.329333    8132 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:57.329348    8132 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:57.329388    8132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:54:57.329410    8132 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:57.329418    8132 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:57.329747    8132 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:54:57.463825    8132 main.go:141] libmachine: Creating SSH key...
	I0429 04:54:57.618640    8132 main.go:141] libmachine: Creating Disk image...
	I0429 04:54:57.618652    8132 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:54:57.618867    8132 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0429 04:54:57.631825    8132 main.go:141] libmachine: STDOUT: 
	I0429 04:54:57.631845    8132 main.go:141] libmachine: STDERR: 
	I0429 04:54:57.631901    8132 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2 +20000M
	I0429 04:54:57.644078    8132 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:54:57.644098    8132 main.go:141] libmachine: STDERR: 
	I0429 04:54:57.644128    8132 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0429 04:54:57.644132    8132 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:54:57.644167    8132 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:cd:5a:07:e6:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0429 04:54:57.646069    8132 main.go:141] libmachine: STDOUT: 
	I0429 04:54:57.646084    8132 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:54:57.646103    8132 client.go:171] duration metric: took 316.867083ms to LocalClient.Create
	I0429 04:54:59.648296    8132 start.go:128] duration metric: took 2.341301625s to createHost
	I0429 04:54:59.648381    8132 start.go:83] releasing machines lock for "force-systemd-env-236000", held for 2.341451542s
	W0429 04:54:59.648453    8132 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:54:59.655808    8132 out.go:177] * Deleting "force-systemd-env-236000" in qemu2 ...
	W0429 04:54:59.687474    8132 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:54:59.687508    8132 start.go:728] Will try again in 5 seconds ...
	I0429 04:55:04.689636    8132 start.go:360] acquireMachinesLock for force-systemd-env-236000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:55:04.690102    8132 start.go:364] duration metric: took 339.291µs to acquireMachinesLock for "force-systemd-env-236000"
	I0429 04:55:04.690248    8132 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:55:04.690575    8132 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:55:04.699964    8132 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 04:55:04.752747    8132 start.go:159] libmachine.API.Create for "force-systemd-env-236000" (driver="qemu2")
	I0429 04:55:04.752819    8132 client.go:168] LocalClient.Create starting
	I0429 04:55:04.752938    8132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:55:04.753017    8132 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:04.753038    8132 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:04.753104    8132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:55:04.753155    8132 main.go:141] libmachine: Decoding PEM data...
	I0429 04:55:04.753169    8132 main.go:141] libmachine: Parsing certificate...
	I0429 04:55:04.753696    8132 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:55:04.912106    8132 main.go:141] libmachine: Creating SSH key...
	I0429 04:55:05.000203    8132 main.go:141] libmachine: Creating Disk image...
	I0429 04:55:05.000216    8132 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:55:05.000403    8132 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0429 04:55:05.013089    8132 main.go:141] libmachine: STDOUT: 
	I0429 04:55:05.013112    8132 main.go:141] libmachine: STDERR: 
	I0429 04:55:05.013180    8132 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2 +20000M
	I0429 04:55:05.024135    8132 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:55:05.024151    8132 main.go:141] libmachine: STDERR: 
	I0429 04:55:05.024169    8132 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0429 04:55:05.024177    8132 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:55:05.024215    8132 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:be:27:bf:e1:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0429 04:55:05.025926    8132 main.go:141] libmachine: STDOUT: 
	I0429 04:55:05.025939    8132 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:55:05.025957    8132 client.go:171] duration metric: took 273.135334ms to LocalClient.Create
	I0429 04:55:07.028148    8132 start.go:128] duration metric: took 2.337550667s to createHost
	I0429 04:55:07.028238    8132 start.go:83] releasing machines lock for "force-systemd-env-236000", held for 2.338130917s
	W0429 04:55:07.028663    8132 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:55:07.037131    8132 out.go:177] 
	W0429 04:55:07.043273    8132 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:55:07.043469    8132 out.go:239] * 
	* 
	W0429 04:55:07.046308    8132 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:55:07.056107    8132 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-236000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-236000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-236000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.047208ms)

-- stdout --
	* The control-plane node force-systemd-env-236000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-236000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-236000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-29 04:55:07.153548 -0700 PDT m=+679.786188168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-236000 -n force-systemd-env-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-236000 -n force-systemd-env-236000: exit status 7 (35.708417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-236000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-236000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-236000
--- FAIL: TestForceSystemdEnv (10.10s)
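Every failure in this section reduces to the same host-side root cause: nothing is listening on /var/run/socket_vmnet, so each qemu-system-aarch64 launch routed through /opt/socket_vmnet/bin/socket_vmnet_client is refused before the VM ever boots. A minimal Go probe for the build agent is sketched below; it is hypothetical (not part of the test suite), and the socket path is the SocketVMnetPath from the cluster config above.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	// socket_vmnet_client dials this Unix socket; when the daemon is down,
	// the dial fails with the same "Connection refused" seen in this log.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, restarting the socket_vmnet service on the agent is the likely fix; the tests themselves cannot recover from this condition, which is why every qemu2 start in this run aborts the same way.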

TestErrorSpam/setup (9.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-789000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-789000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 --driver=qemu2 : exit status 80 (9.929257334s)

-- stdout --
	* [nospam-789000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-789000" primary control-plane node in "nospam-789000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-789000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-789000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-789000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-789000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18771
- KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-789000" primary control-plane node in "nospam-789000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-789000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.93s)
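Beyond the exit 80 start failure, this test also fails its spam check: every stderr line quoted above is flagged as "unexpected" because the test tolerates only a small set of known messages. The helper below is a rough, hypothetical mirror of that kind of allowlist scan (unexpectedStderr and its inputs are illustrative, not the suite's actual code in error_spam_test.go).

package main

import (
	"fmt"
	"strings"
)

// unexpectedStderr returns every non-empty stderr line that does not
// match any allowed substring.
func unexpectedStderr(stderr string, allowed []string) []string {
	var bad []string
	for _, line := range strings.Split(strings.TrimSpace(stderr), "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		ok := false
		for _, a := range allowed {
			if strings.Contains(line, a) {
				ok = true
				break
			}
		}
		if !ok {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	// With an empty allowlist, every warning from the failed start is reported,
	// matching the run of "unexpected stderr" lines in the log above.
	stderr := "! StartHost failed, but will try again: ...\n* \n"
	for _, l := range unexpectedStderr(stderr, nil) {
		fmt.Printf("unexpected stderr: %q\n", l)
	}
}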

TestFunctional/serial/StartWithProxy (9.95s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-431000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-431000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.874533167s)

-- stdout --
	* [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-431000" primary control-plane node in "functional-431000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-431000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50992 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50992 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50992 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-431000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18771
- KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-431000" primary control-plane node in "functional-431000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-431000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50992 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50992 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50992 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (73.545ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.95s)
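StartWithProxy runs minikube under HTTP_PROXY=localhost:50992, which is why the "Local proxy ignored" warnings appear, and it expects proxy-related text ("Found network options", "You appear to be using a proxy") that never prints because provisioning aborts first. A hedged sketch of reproducing that environment by hand, with the command and flags copied from the failing step:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-431000",
		"--memory=4000", "--apiserver-port=8441", "--wait=all", "--driver=qemu2")
	// minikube inspects HTTP_PROXY and warns that a localhost proxy is not
	// passed to the docker env, as seen in the stderr above.
	cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:50992")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
	}
}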

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-431000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-431000 --alsologtostderr -v=8: exit status 80 (5.182690958s)

-- stdout --
	* [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-431000" primary control-plane node in "functional-431000" cluster
	* Restarting existing qemu2 VM for "functional-431000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-431000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:44:50.435361    6765 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:44:50.435485    6765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:44:50.435488    6765 out.go:304] Setting ErrFile to fd 2...
	I0429 04:44:50.435490    6765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:44:50.435620    6765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:44:50.436671    6765 out.go:298] Setting JSON to false
	I0429 04:44:50.452492    6765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4461,"bootTime":1714386629,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:44:50.452568    6765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:44:50.456516    6765 out.go:177] * [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:44:50.462220    6765 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:44:50.465237    6765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:44:50.462265    6765 notify.go:220] Checking for updates...
	I0429 04:44:50.469210    6765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:44:50.472262    6765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:44:50.475262    6765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:44:50.478257    6765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:44:50.481558    6765 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:44:50.481615    6765 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:44:50.486257    6765 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:44:50.493191    6765 start.go:297] selected driver: qemu2
	I0429 04:44:50.493198    6765 start.go:901] validating driver "qemu2" against &{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:44:50.493249    6765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:44:50.495434    6765 cni.go:84] Creating CNI manager for ""
	I0429 04:44:50.495452    6765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:44:50.495496    6765 start.go:340] cluster config:
	{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:44:50.499482    6765 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:44:50.506209    6765 out.go:177] * Starting "functional-431000" primary control-plane node in "functional-431000" cluster
	I0429 04:44:50.510243    6765 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:44:50.510261    6765 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:44:50.510269    6765 cache.go:56] Caching tarball of preloaded images
	I0429 04:44:50.510325    6765 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:44:50.510343    6765 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:44:50.510402    6765 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/functional-431000/config.json ...
	I0429 04:44:50.510901    6765 start.go:360] acquireMachinesLock for functional-431000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:44:50.510933    6765 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "functional-431000"
	I0429 04:44:50.510942    6765 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:44:50.510947    6765 fix.go:54] fixHost starting: 
	I0429 04:44:50.511050    6765 fix.go:112] recreateIfNeeded on functional-431000: state=Stopped err=<nil>
	W0429 04:44:50.511058    6765 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:44:50.520243    6765 out.go:177] * Restarting existing qemu2 VM for "functional-431000" ...
	I0429 04:44:50.523228    6765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:fb:82:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/disk.qcow2
	I0429 04:44:50.525126    6765 main.go:141] libmachine: STDOUT: 
	I0429 04:44:50.525149    6765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:44:50.525176    6765 fix.go:56] duration metric: took 14.228ms for fixHost
	I0429 04:44:50.525180    6765 start.go:83] releasing machines lock for "functional-431000", held for 14.243875ms
	W0429 04:44:50.525186    6765 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:44:50.525214    6765 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:44:50.525218    6765 start.go:728] Will try again in 5 seconds ...
	I0429 04:44:55.525710    6765 start.go:360] acquireMachinesLock for functional-431000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:44:55.526086    6765 start.go:364] duration metric: took 288.542µs to acquireMachinesLock for "functional-431000"
	I0429 04:44:55.526206    6765 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:44:55.526231    6765 fix.go:54] fixHost starting: 
	I0429 04:44:55.526933    6765 fix.go:112] recreateIfNeeded on functional-431000: state=Stopped err=<nil>
	W0429 04:44:55.526959    6765 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:44:55.536260    6765 out.go:177] * Restarting existing qemu2 VM for "functional-431000" ...
	I0429 04:44:55.540445    6765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:fb:82:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/disk.qcow2
	I0429 04:44:55.549450    6765 main.go:141] libmachine: STDOUT: 
	I0429 04:44:55.549518    6765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:44:55.549586    6765 fix.go:56] duration metric: took 23.361834ms for fixHost
	I0429 04:44:55.549606    6765 start.go:83] releasing machines lock for "functional-431000", held for 23.502ms
	W0429 04:44:55.549819    6765 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:44:55.557368    6765 out.go:177] 
	W0429 04:44:55.561298    6765 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:44:55.561331    6765 out.go:239] * 
	* 
	W0429 04:44:55.564019    6765 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:44:55.571240    6765 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-431000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.184389s for "functional-431000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (69.010875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
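The SoftStart trace above shows minikube's recovery path in miniature: fixHost fails, the tool waits a fixed 5 seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION. Below is a compressed sketch of that two-attempt loop; startHost here is a stand-in for the real libmachine call, not minikube's implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the libmachine start; in this run it always
// fails the same way because the socket_vmnet daemon is down.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			return
		}
		if i < attempts-1 {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
		}
	}
	fmt.Println("X Exiting due to GUEST_PROVISION:", err)
}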

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.640291ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-431000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (32.612958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
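This subtest never reaches minikube at all: because no cluster was provisioned, the kubeconfig has no current-context, and plain kubectl exits 1. The same check can be reproduced outside the harness; the expected profile name below is taken from the test, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		// With no cluster provisioned, kubectl exits 1 with
		// "error: current-context is not set", as in the log above.
		fmt.Println("no current context:", err)
		return
	}
	ctx := strings.TrimSpace(string(out))
	if ctx != "functional-431000" {
		fmt.Printf("expected context %q, got %q\n", "functional-431000", ctx)
	}
}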

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-431000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-431000 get po -A: exit status 1 (26.191125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-431000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-431000\n"*: args "kubectl --context functional-431000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-431000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (32.662416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh sudo crictl images: exit status 83 (45.896833ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-431000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.845ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-431000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.924667ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.965667ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-431000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)
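From this point most subtests fail fast with exit status 83 (profile exists, host Stopped) rather than 80. When scripting around these binaries it helps to recover the exact exit code; the sketch below uses the standard library's ExitError and copies the crictl inspecti command from the failing step above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-431000",
		"ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest")
	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The report shows exit status 83 here because the VM never started.
			fmt.Println("exit code:", exitErr.ExitCode())
			return
		}
		fmt.Println("could not run command:", err)
	}
}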

TestFunctional/serial/MinikubeKubectlCmd (0.64s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 kubectl -- --context functional-431000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 kubectl -- --context functional-431000 get pods: exit status 1 (604.785833ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-431000
	* no server found for cluster "functional-431000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-431000 kubectl -- --context functional-431000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (33.683833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.64s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-431000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-431000 get pods: exit status 1 (918.500042ms)

** stderr **
	Error in configuration: 
	* context was not found for specified context: functional-431000
	* no server found for cluster "functional-431000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-431000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (32.196083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)
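[editor's note] Both kubectl failures above are the downstream symptom of the cluster never starting: minikube only writes the functional-431000 context into the kubeconfig after a successful start, so kubectl has no context and no cluster to resolve. A minimal way one might confirm this on the agent, using plain kubectl against the KUBECONFIG path that appears in the ExtraConfig start output below (this check is illustrative; the suite itself does not run it):

	# list the contexts the tests' kubeconfig actually contains
	kubectl --kubeconfig /Users/jenkins/minikube-integration/18771-6092/kubeconfig config get-contexts

With the VM stopped, functional-431000 would be absent from that list, matching the "context was not found" error both tests report.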

                                                
                                    
TestFunctional/serial/ExtraConfig (5.3s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-431000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-431000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.225997042s)

-- stdout --
	* [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-431000" primary control-plane node in "functional-431000" cluster
	* Restarting existing qemu2 VM for "functional-431000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-431000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-431000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.226563s for "functional-431000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (70.957333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.30s)
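[editor's note] Every failure in this run traces back to the same root cause visible above: the qemu2 driver launches the VM through socket_vmnet_client, and the helper socket at /var/run/socket_vmnet refuses connections, so the VM never boots. A minimal sketch of how one might verify the helper on the agent, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (paths taken from the libmachine command line logged below):

	# is the vmnet helper socket present on the host?
	ls -l /var/run/socket_vmnet
	# restart the helper; it runs as root because vmnet needs elevated privileges
	sudo brew services restart socket_vmnet

If the socket is missing or nothing is listening on it, every "Restarting existing qemu2 VM" attempt fails with exactly the "Connection refused" seen throughout this log.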

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-431000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-431000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.464083ms)

** stderr **
	error: context "functional-431000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-431000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (33.077209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 logs: exit status 83 (77.802375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT |                     |
	|         | -p download-only-363000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT | 29 Apr 24 04:43 PDT |
	| delete  | -p download-only-363000                                                  | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT | 29 Apr 24 04:43 PDT |
	| start   | -o=json --download-only                                                  | download-only-647000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT |                     |
	|         | -p download-only-647000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| delete  | -p download-only-647000                                                  | download-only-647000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| delete  | -p download-only-363000                                                  | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| delete  | -p download-only-647000                                                  | download-only-647000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| start   | --download-only -p                                                       | binary-mirror-714000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | binary-mirror-714000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50957                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-714000                                                  | binary-mirror-714000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| addons  | enable dashboard -p                                                      | addons-744000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | addons-744000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-744000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | addons-744000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-744000 --wait=true                                             | addons-744000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-744000                                                         | addons-744000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| start   | -p nospam-789000 -n=1 --memory=2250 --wait=false                         | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-789000                                                         | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | minikube-local-cache-test:functional-431000                              |                      |         |         |                     |                     |
	| cache   | functional-431000 cache delete                                           | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | minikube-local-cache-test:functional-431000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| ssh     | functional-431000 ssh sudo                                               | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-431000                                                        | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-431000 ssh                                                    | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-431000 cache reload                                           | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	| ssh     | functional-431000 ssh                                                    | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-431000 kubectl --                                             | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
	|         | --context functional-431000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:45 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 04:45:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 04:45:00.676465    6843 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:45:00.676598    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:00.676599    6843 out.go:304] Setting ErrFile to fd 2...
	I0429 04:45:00.676601    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:00.676732    6843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:45:00.677813    6843 out.go:298] Setting JSON to false
	I0429 04:45:00.693734    6843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4471,"bootTime":1714386629,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:45:00.693799    6843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:45:00.699223    6843 out.go:177] * [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:45:00.714094    6843 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:45:00.709142    6843 notify.go:220] Checking for updates...
	I0429 04:45:00.721032    6843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:45:00.728996    6843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:45:00.737088    6843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:45:00.745009    6843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:45:00.753006    6843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:45:00.756415    6843 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:45:00.756472    6843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:45:00.761074    6843 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:45:00.768045    6843 start.go:297] selected driver: qemu2
	I0429 04:45:00.768049    6843 start.go:901] validating driver "qemu2" against &{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:45:00.768103    6843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:45:00.770769    6843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:45:00.770815    6843 cni.go:84] Creating CNI manager for ""
	I0429 04:45:00.770822    6843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:45:00.770879    6843 start.go:340] cluster config:
	{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:45:00.775838    6843 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:45:00.784086    6843 out.go:177] * Starting "functional-431000" primary control-plane node in "functional-431000" cluster
	I0429 04:45:00.789017    6843 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:45:00.789033    6843 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:45:00.789044    6843 cache.go:56] Caching tarball of preloaded images
	I0429 04:45:00.789114    6843 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:45:00.789118    6843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:45:00.789199    6843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/functional-431000/config.json ...
	I0429 04:45:00.789721    6843 start.go:360] acquireMachinesLock for functional-431000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:45:00.789760    6843 start.go:364] duration metric: took 33.792µs to acquireMachinesLock for "functional-431000"
	I0429 04:45:00.789770    6843 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:45:00.789776    6843 fix.go:54] fixHost starting: 
	I0429 04:45:00.789916    6843 fix.go:112] recreateIfNeeded on functional-431000: state=Stopped err=<nil>
	W0429 04:45:00.789924    6843 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:45:00.801157    6843 out.go:177] * Restarting existing qemu2 VM for "functional-431000" ...
	I0429 04:45:00.807158    6843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:fb:82:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/disk.qcow2
	I0429 04:45:00.809913    6843 main.go:141] libmachine: STDOUT: 
	I0429 04:45:00.809944    6843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:45:00.809983    6843 fix.go:56] duration metric: took 20.207416ms for fixHost
	I0429 04:45:00.809986    6843 start.go:83] releasing machines lock for "functional-431000", held for 20.222375ms
	W0429 04:45:00.809997    6843 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:45:00.810043    6843 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:45:00.810049    6843 start.go:728] Will try again in 5 seconds ...
	I0429 04:45:05.812278    6843 start.go:360] acquireMachinesLock for functional-431000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:45:05.812826    6843 start.go:364] duration metric: took 443.542µs to acquireMachinesLock for "functional-431000"
	I0429 04:45:05.813028    6843 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:45:05.813042    6843 fix.go:54] fixHost starting: 
	I0429 04:45:05.813768    6843 fix.go:112] recreateIfNeeded on functional-431000: state=Stopped err=<nil>
	W0429 04:45:05.813789    6843 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:45:05.822260    6843 out.go:177] * Restarting existing qemu2 VM for "functional-431000" ...
	I0429 04:45:05.826497    6843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:fb:82:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/disk.qcow2
	I0429 04:45:05.836180    6843 main.go:141] libmachine: STDOUT: 
	I0429 04:45:05.836227    6843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:45:05.836313    6843 fix.go:56] duration metric: took 23.274ms for fixHost
	I0429 04:45:05.836324    6843 start.go:83] releasing machines lock for "functional-431000", held for 23.42525ms
	W0429 04:45:05.836532    6843 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:45:05.845276    6843 out.go:177] 
	W0429 04:45:05.848333    6843 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:45:05.848361    6843 out.go:239] * 
	W0429 04:45:05.850956    6843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:45:05.859254    6843 out.go:177] 
	
	
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-431000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
[the output captured by the failing assertion repeats, verbatim, the "==> Audit <==" command table shown above]
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-789000                                                         | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | minikube-local-cache-test:functional-431000                              |                      |         |         |                     |                     |
| cache   | functional-431000 cache delete                                           | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | minikube-local-cache-test:functional-431000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| ssh     | functional-431000 ssh sudo                                               | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-431000                                                        | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-431000 ssh                                                    | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-431000 cache reload                                           | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| ssh     | functional-431000 ssh                                                    | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-431000 kubectl --                                             | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | --context functional-431000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:45 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/29 04:45:00
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0429 04:45:00.676465    6843 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:00.676598    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:00.676599    6843 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:00.676601    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:00.676732    6843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:00.677813    6843 out.go:298] Setting JSON to false
I0429 04:45:00.693734    6843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4471,"bootTime":1714386629,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0429 04:45:00.693799    6843 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0429 04:45:00.699223    6843 out.go:177] * [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
I0429 04:45:00.714094    6843 out.go:177]   - MINIKUBE_LOCATION=18771
I0429 04:45:00.709142    6843 notify.go:220] Checking for updates...
I0429 04:45:00.721032    6843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
I0429 04:45:00.728996    6843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0429 04:45:00.737088    6843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0429 04:45:00.745009    6843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
I0429 04:45:00.753006    6843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0429 04:45:00.756415    6843 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:00.756472    6843 driver.go:392] Setting default libvirt URI to qemu:///system
I0429 04:45:00.761074    6843 out.go:177] * Using the qemu2 driver based on existing profile
I0429 04:45:00.768045    6843 start.go:297] selected driver: qemu2
I0429 04:45:00.768049    6843 start.go:901] validating driver "qemu2" against &{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0429 04:45:00.768103    6843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0429 04:45:00.770769    6843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0429 04:45:00.770815    6843 cni.go:84] Creating CNI manager for ""
I0429 04:45:00.770822    6843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0429 04:45:00.770879    6843 start.go:340] cluster config:
{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0429 04:45:00.775838    6843 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0429 04:45:00.784086    6843 out.go:177] * Starting "functional-431000" primary control-plane node in "functional-431000" cluster
I0429 04:45:00.789017    6843 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0429 04:45:00.789033    6843 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0429 04:45:00.789044    6843 cache.go:56] Caching tarball of preloaded images
I0429 04:45:00.789114    6843 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0429 04:45:00.789118    6843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0429 04:45:00.789199    6843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/functional-431000/config.json ...
I0429 04:45:00.789721    6843 start.go:360] acquireMachinesLock for functional-431000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0429 04:45:00.789760    6843 start.go:364] duration metric: took 33.792µs to acquireMachinesLock for "functional-431000"
I0429 04:45:00.789770    6843 start.go:96] Skipping create...Using existing machine configuration
I0429 04:45:00.789776    6843 fix.go:54] fixHost starting: 
I0429 04:45:00.789916    6843 fix.go:112] recreateIfNeeded on functional-431000: state=Stopped err=<nil>
W0429 04:45:00.789924    6843 fix.go:138] unexpected machine state, will restart: <nil>
I0429 04:45:00.801157    6843 out.go:177] * Restarting existing qemu2 VM for "functional-431000" ...
I0429 04:45:00.807158    6843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:fb:82:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/disk.qcow2
I0429 04:45:00.809913    6843 main.go:141] libmachine: STDOUT: 
I0429 04:45:00.809944    6843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0429 04:45:00.809983    6843 fix.go:56] duration metric: took 20.207416ms for fixHost
I0429 04:45:00.809986    6843 start.go:83] releasing machines lock for "functional-431000", held for 20.222375ms
W0429 04:45:00.809997    6843 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0429 04:45:00.810043    6843 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0429 04:45:00.810049    6843 start.go:728] Will try again in 5 seconds ...
I0429 04:45:05.812278    6843 start.go:360] acquireMachinesLock for functional-431000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0429 04:45:05.812826    6843 start.go:364] duration metric: took 443.542µs to acquireMachinesLock for "functional-431000"
I0429 04:45:05.813028    6843 start.go:96] Skipping create...Using existing machine configuration
I0429 04:45:05.813042    6843 fix.go:54] fixHost starting: 
I0429 04:45:05.813768    6843 fix.go:112] recreateIfNeeded on functional-431000: state=Stopped err=<nil>
W0429 04:45:05.813789    6843 fix.go:138] unexpected machine state, will restart: <nil>
I0429 04:45:05.822260    6843 out.go:177] * Restarting existing qemu2 VM for "functional-431000" ...
I0429 04:45:05.826497    6843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:fb:82:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/disk.qcow2
I0429 04:45:05.836180    6843 main.go:141] libmachine: STDOUT: 
I0429 04:45:05.836227    6843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0429 04:45:05.836313    6843 fix.go:56] duration metric: took 23.274ms for fixHost
I0429 04:45:05.836324    6843 start.go:83] releasing machines lock for "functional-431000", held for 23.42525ms
W0429 04:45:05.836532    6843 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0429 04:45:05.845276    6843 out.go:177] 
W0429 04:45:05.848333    6843 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0429 04:45:05.848361    6843 out.go:239] * 
W0429 04:45:05.850956    6843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0429 04:45:05.859254    6843 out.go:177] 

* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
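
Every failure in this block traces to one host-side fault: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the daemon socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so the VM never leaves state=Stopped and dependent commands such as "logs" fail almost immediately. A minimal host-side triage sketch follows; it assumes socket_vmnet was installed through Homebrew and that minikube is using its default socket path, neither of which this report confirms:

  # Does the socket exist, and is a socket_vmnet daemon alive?
  ls -l /var/run/socket_vmnet
  pgrep -fl socket_vmnet
  # If Homebrew manages the daemon, restart it (formula name assumed).
  sudo brew services restart socket_vmnet
  # Then recreate the profile, as the failure output above suggests.
  minikube delete -p functional-431000
  minikube start -p functional-431000 --driver=qemu2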

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2877455225/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT |                     |
|         | -p download-only-363000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT | 29 Apr 24 04:43 PDT |
| delete  | -p download-only-363000                                                  | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT | 29 Apr 24 04:43 PDT |
| start   | -o=json --download-only                                                  | download-only-647000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT |                     |
|         | -p download-only-647000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| delete  | -p download-only-647000                                                  | download-only-647000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| delete  | -p download-only-363000                                                  | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| delete  | -p download-only-647000                                                  | download-only-647000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| start   | --download-only -p                                                       | binary-mirror-714000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | binary-mirror-714000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50957                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-714000                                                  | binary-mirror-714000 | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| addons  | enable dashboard -p                                                      | addons-744000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | addons-744000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-744000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | addons-744000                                                            |                      |         |         |                     |                     |
| start   | -p addons-744000 --wait=true                                             | addons-744000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         | --addons=ingress                                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-744000                                                         | addons-744000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| start   | -p nospam-789000 -n=1 --memory=2250 --wait=false                         | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-789000 --log_dir                                                  | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-789000                                                         | nospam-789000        | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-431000 cache add                                              | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | minikube-local-cache-test:functional-431000                              |                      |         |         |                     |                     |
| cache   | functional-431000 cache delete                                           | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | minikube-local-cache-test:functional-431000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| ssh     | functional-431000 ssh sudo                                               | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-431000                                                        | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-431000 ssh                                                    | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-431000 cache reload                                           | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
| ssh     | functional-431000 ssh                                                    | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT | 29 Apr 24 04:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-431000 kubectl --                                             | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:44 PDT |                     |
|         | --context functional-431000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-431000                                                     | functional-431000    | jenkins | v1.33.0 | 29 Apr 24 04:45 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/29 04:45:00
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0429 04:45:00.676465    6843 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:00.676598    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:00.676599    6843 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:00.676601    6843 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:00.676732    6843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:00.677813    6843 out.go:298] Setting JSON to false
I0429 04:45:00.693734    6843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4471,"bootTime":1714386629,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0429 04:45:00.693799    6843 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0429 04:45:00.699223    6843 out.go:177] * [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
I0429 04:45:00.714094    6843 out.go:177]   - MINIKUBE_LOCATION=18771
I0429 04:45:00.709142    6843 notify.go:220] Checking for updates...
I0429 04:45:00.721032    6843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
I0429 04:45:00.728996    6843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0429 04:45:00.737088    6843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0429 04:45:00.745009    6843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
I0429 04:45:00.753006    6843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0429 04:45:00.756415    6843 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:00.756472    6843 driver.go:392] Setting default libvirt URI to qemu:///system
I0429 04:45:00.761074    6843 out.go:177] * Using the qemu2 driver based on existing profile
I0429 04:45:00.768045    6843 start.go:297] selected driver: qemu2
I0429 04:45:00.768049    6843 start.go:901] validating driver "qemu2" against &{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0429 04:45:00.768103    6843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0429 04:45:00.770769    6843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0429 04:45:00.770815    6843 cni.go:84] Creating CNI manager for ""
I0429 04:45:00.770822    6843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0429 04:45:00.770879    6843 start.go:340] cluster config:
{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0429 04:45:00.775838    6843 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0429 04:45:00.784086    6843 out.go:177] * Starting "functional-431000" primary control-plane node in "functional-431000" cluster
I0429 04:45:00.789017    6843 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0429 04:45:00.789033    6843 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0429 04:45:00.789044    6843 cache.go:56] Caching tarball of preloaded images
I0429 04:45:00.789114    6843 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0429 04:45:00.789118    6843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0429 04:45:00.789199    6843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/functional-431000/config.json ...
I0429 04:45:00.789721    6843 start.go:360] acquireMachinesLock for functional-431000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0429 04:45:00.789760    6843 start.go:364] duration metric: took 33.792µs to acquireMachinesLock for "functional-431000"
I0429 04:45:00.789770    6843 start.go:96] Skipping create...Using existing machine configuration
I0429 04:45:00.789776    6843 fix.go:54] fixHost starting: 
I0429 04:45:00.789916    6843 fix.go:112] recreateIfNeeded on functional-431000: state=Stopped err=<nil>
W0429 04:45:00.789924    6843 fix.go:138] unexpected machine state, will restart: <nil>
I0429 04:45:00.801157    6843 out.go:177] * Restarting existing qemu2 VM for "functional-431000" ...
I0429 04:45:00.807158    6843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:fb:82:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/disk.qcow2
I0429 04:45:00.809913    6843 main.go:141] libmachine: STDOUT: 
I0429 04:45:00.809944    6843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0429 04:45:00.809983    6843 fix.go:56] duration metric: took 20.207416ms for fixHost
I0429 04:45:00.809986    6843 start.go:83] releasing machines lock for "functional-431000", held for 20.222375ms
W0429 04:45:00.809997    6843 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0429 04:45:00.810043    6843 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0429 04:45:00.810049    6843 start.go:728] Will try again in 5 seconds ...
I0429 04:45:05.812278    6843 start.go:360] acquireMachinesLock for functional-431000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0429 04:45:05.812826    6843 start.go:364] duration metric: took 443.542µs to acquireMachinesLock for "functional-431000"
I0429 04:45:05.813028    6843 start.go:96] Skipping create...Using existing machine configuration
I0429 04:45:05.813042    6843 fix.go:54] fixHost starting: 
I0429 04:45:05.813768    6843 fix.go:112] recreateIfNeeded on functional-431000: state=Stopped err=<nil>
W0429 04:45:05.813789    6843 fix.go:138] unexpected machine state, will restart: <nil>
I0429 04:45:05.822260    6843 out.go:177] * Restarting existing qemu2 VM for "functional-431000" ...
I0429 04:45:05.826497    6843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:fb:82:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/functional-431000/disk.qcow2
I0429 04:45:05.836180    6843 main.go:141] libmachine: STDOUT: 
I0429 04:45:05.836227    6843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0429 04:45:05.836313    6843 fix.go:56] duration metric: took 23.274ms for fixHost
I0429 04:45:05.836324    6843 start.go:83] releasing machines lock for "functional-431000", held for 23.42525ms
W0429 04:45:05.836532    6843 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0429 04:45:05.845276    6843 out.go:177] 
W0429 04:45:05.848333    6843 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0429 04:45:05.848361    6843 out.go:239] * 
W0429 04:45:05.850956    6843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0429 04:45:05.859254    6843 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
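Note: every start attempt in this log dies at the same step — socket_vmnet_client cannot reach the /var/run/socket_vmnet socket, so the qemu2 VM never gets its network and provisioning aborts with GUEST_PROVISION. As a hedged aside (not part of the captured run), a quick triage on the build host might look like the following; the Homebrew service name is an assumption and depends on how socket_vmnet was installed:

    # Does the socket exist, and is the daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If socket_vmnet was installed as a Homebrew service (assumption),
    # restarting it recreates the socket:
    sudo brew services restart socket_vmnet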

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-431000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-431000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.010666ms)

** stderr ** 
	error: context "functional-431000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-431000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
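Note: the test never reaches its invalid-service scenario; kubectl bails out first because the kubeconfig has no functional-431000 context (the cluster above never started). A hedged way to confirm which contexts actually exist:

    # Lists contexts in the active kubeconfig; the profile context only
    # appears after "minikube start -p functional-431000" succeeds.
    kubectl config get-contexts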

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-431000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-431000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-431000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-431000 --alsologtostderr -v=1] stderr:
I0429 04:45:49.969598    7158 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:49.970011    7158 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:49.970015    7158 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:49.970018    7158 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:49.970166    7158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:49.970353    7158 mustload.go:65] Loading cluster: functional-431000
I0429 04:45:49.970545    7158 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:49.975318    7158 out.go:177] * The control-plane node functional-431000 host is not running: state=Stopped
I0429 04:45:49.979238    7158 out.go:177]   To start a cluster, run: "minikube start -p functional-431000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (44.105458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
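Note: the assertion at functional_test.go:914 ("output didn't produce a URL") is about stdout contents: with --url, minikube is expected to print the dashboard proxy URL rather than open a browser. For reference, against a running cluster (hedged, not part of this run):

    # Prints a http://127.0.0.1:<port>/... proxy URL on stdout and keeps
    # the tunnel open; Ctrl-C stops it.
    minikube -p functional-431000 dashboard --url --port 36195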

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 status: exit status 7 (32.306ms)

-- stdout --
	functional-431000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-431000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (31.72375ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-431000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 status -o json: exit status 7 (32.035333ms)

-- stdout --
	{"Name":"functional-431000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-431000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (31.450958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
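Note: the harness treats exit status 7 from "minikube status" as "may be ok" because the command encodes component state in its exit code; 7 is consistent with host, kubelet and apiserver all being stopped (a bitmask of 1|2|4 in minikube's status command, per our reading — treat as hedged). A sketch of branching on it rather than parsing the text output:

    # Exit code 0 means all components running; anything else here
    # means "cluster not usable".
    out/minikube-darwin-arm64 -p functional-431000 status -o json
    rc=$?
    if [ "$rc" -ne 0 ]; then
      echo "cluster down (status exit code $rc)"
    fi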

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-431000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-431000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.462333ms)

** stderr ** 
	error: context "functional-431000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-431000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-431000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-431000 describe po hello-node-connect: exit status 1 (26.387792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:1600: "kubectl --context functional-431000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-431000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-431000 logs -l app=hello-node-connect: exit status 1 (26.394458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:1606: "kubectl --context functional-431000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-431000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-431000 describe svc hello-node-connect: exit status 1 (25.681833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:1612: "kubectl --context functional-431000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (32.5385ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-431000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (32.738292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "echo hello": exit status 83 (45.965833ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"\n"*. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "cat /etc/hostname": exit status 83 (49.024667ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-431000"- but got *"* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"\n"*. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (36.525209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)
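Note: exit status 83 here is not a generic failure; in minikube's reserved exit-code scheme it reads as a guest-not-running condition, matching the "host is not running: state=Stopped" advice text (our reading of the code, so treat as hedged). A sketch that separates it from real ssh failures:

    out/minikube-darwin-arm64 -p functional-431000 ssh "echo hello"
    case $? in
      0)  echo "ssh ok" ;;
      83) echo "guest not running; start it first" ;;
      *)  echo "unexpected ssh failure" ;;
    esac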

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (57.786333ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-431000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.944208ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-431000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-431000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cp functional-431000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2611139841/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 cp functional-431000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2611139841/001/cp-test.txt: exit status 83 (42.246667ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-431000 cp functional-431000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2611139841/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.105625ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2611139841/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (50.643833ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-431000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (42.962334ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-431000 ssh -n functional-431000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-431000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-431000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
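Note: the three cp invocations above exercise both directions of minikube's "cp" argument convention, where either side may carry a profile/node prefix. For reference, against a running cluster the same calls would be (hedged recap, not part of this run):

    # host -> VM
    minikube -p functional-431000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # VM -> host
    minikube -p functional-431000 cp functional-431000:/home/docker/cp-test.txt ./cp-test.txt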

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6500/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/test/nested/copy/6500/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/test/nested/copy/6500/hosts": exit status 83 (50.212208ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/test/nested/copy/6500/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-431000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-431000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (32.682042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
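Note: the probed path /etc/test/nested/copy/6500/hosts comes from the harness staging a file under the runner's minikube home before start (6500 being the test process PID baked into the path); minikube mirrors everything under $MINIKUBE_HOME/files into the guest filesystem at provisioning time. A hedged sketch of the same mechanism by hand:

    # Anything under ~/.minikube/files/ is copied into the VM on start,
    # so this appears in the guest as /etc/test/nested/copy/6500/hosts.
    mkdir -p ~/.minikube/files/etc/test/nested/copy/6500
    cp /etc/hosts ~/.minikube/files/etc/test/nested/copy/6500/hosts
    minikube start -p functional-431000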

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6500.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/ssl/certs/6500.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/ssl/certs/6500.pem": exit status 83 (43.047833ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6500.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"sudo cat /etc/ssl/certs/6500.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6500.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-431000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-431000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6500.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /usr/share/ca-certificates/6500.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /usr/share/ca-certificates/6500.pem": exit status 83 (42.808709ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6500.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"sudo cat /usr/share/ca-certificates/6500.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6500.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-431000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-431000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (55.505ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-431000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-431000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/65002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/ssl/certs/65002.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/ssl/certs/65002.pem": exit status 83 (39.771083ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/65002.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"sudo cat /etc/ssl/certs/65002.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/65002.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-431000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-431000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/65002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /usr/share/ca-certificates/65002.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /usr/share/ca-certificates/65002.pem": exit status 83 (46.757ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/65002.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"sudo cat /usr/share/ca-certificates/65002.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/65002.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-431000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-431000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (39.583792ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-431000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-431000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-431000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (32.242375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
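Note: the hashed paths the test probes (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) are OpenSSL subject-hash names for the same two PEM files, minikube_test.pem and minikube_test2.pem. A hedged way to reproduce the hash locally:

    # Prints the subject hash OpenSSL would use for /etc/ssl/certs/<hash>.0;
    # per the paths above this should be 51391683 for minikube_test.pem.
    openssl x509 -noout -subject_hash -in minikube_test.pem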

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-431000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-431000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.238417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-431000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-431000 -n functional-431000: exit status 7 (32.288083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
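
All five missing labels reduce to a single root cause: the "functional-431000" kubeconfig context was never created because the VM never started. On a healthy cluster the same check can be reproduced by hand with the exact go-template the test uses:

	kubectl --context functional-431000 get nodes --output=go-template \
	  "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"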

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo systemctl is-active crio": exit status 83 (41.504916ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
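
The check here is a plain systemctl probe executed over minikube ssh; exit status 83 means the command never reached the guest at all. Against a running docker-runtime cluster, the equivalent manual probe (a sketch, same profile name as this run) would be:

	out/minikube-darwin-arm64 -p functional-431000 ssh "sudo systemctl is-active crio"

and should print "inactive" with a non-zero exit, since docker is the configured runtime.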

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 version -o=json --components: exit status 83 (44.0325ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
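
Each expected component name (buildctl, containerd, crictl, docker, and so on) comes from the JSON the version command emits when it can query a running node. The command under test, for manual reproduction once the host starts:

	out/minikube-darwin-arm64 -p functional-431000 version -o=json --components

Exit status 83 means minikube declined to query the stopped host, so no component JSON was produced.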

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-431000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-431000 image ls --format short --alsologtostderr:
I0429 04:45:50.388315    7173 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:50.388473    7173 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.388481    7173 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:50.388484    7173 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.388607    7173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:50.389016    7173 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:50.389077    7173 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
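
ImageListShort and the table/json/yaml variants below run the same listing with different output formats, and all four fail identically: a stopped host yields an empty image list. For reference, the invocation is:

	out/minikube-darwin-arm64 -p functional-431000 image ls --format short --alsologtostderr

On a healthy cluster the short format prints one image reference per line, including the registry.k8s.io/pause entry the assertion looks for.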

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-431000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-431000 image ls --format table --alsologtostderr:
I0429 04:45:50.622482    7185 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:50.622637    7185 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.622640    7185 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:50.622643    7185 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.622765    7185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:50.623160    7185 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:50.623219    7185 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-431000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-431000 image ls --format json --alsologtostderr:
I0429 04:45:50.584125    7183 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:50.584283    7183 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.584286    7183 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:50.584288    7183 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.584414    7183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:50.584836    7183 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:50.584910    7183 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-431000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-431000 image ls --format yaml --alsologtostderr:
I0429 04:45:50.426104    7175 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:50.426223    7175 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.426226    7175 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:50.426228    7175 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.426342    7175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:50.426820    7175 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:50.426878    7175 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh pgrep buildkitd: exit status 83 (43.925ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image build -t localhost/my-image:functional-431000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-431000 image build -t localhost/my-image:functional-431000 testdata/build --alsologtostderr:
I0429 04:45:50.507081    7179 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:50.507529    7179 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.507533    7179 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:50.507535    7179 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:50.507705    7179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:50.508125    7179 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:50.508573    7179 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:50.508795    7179 build_images.go:133] succeeded building to: 
I0429 04:45:50.508798    7179 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls
functional_test.go:442: expected "localhost/my-image:functional-431000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
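
Note that the build step claims success while naming no runtime ("succeeded building to: " is empty): pgrep found no buildkitd, so there was nothing to build against. The intended flow, as a sketch using the testdata/build context shipped with the test suite:

	out/minikube-darwin-arm64 -p functional-431000 ssh pgrep buildkitd
	out/minikube-darwin-arm64 -p functional-431000 image build -t localhost/my-image:functional-431000 testdata/build
	out/minikube-darwin-arm64 -p functional-431000 image ls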

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-431000 docker-env) && out/minikube-darwin-arm64 status -p functional-431000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-431000 docker-env) && out/minikube-darwin-arm64 status -p functional-431000": exit status 1 (45.839625ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2: exit status 83 (44.707375ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
** stderr ** 
	I0429 04:45:50.254408    7167 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:45:50.255038    7167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:50.255043    7167 out.go:304] Setting ErrFile to fd 2...
	I0429 04:45:50.255045    7167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:50.255194    7167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:45:50.255395    7167 mustload.go:65] Loading cluster: functional-431000
	I0429 04:45:50.255589    7167 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:45:50.259562    7167 out.go:177] * The control-plane node functional-431000 host is not running: state=Stopped
	I0429 04:45:50.263694    7167 out.go:177]   To start a cluster, run: "minikube start -p functional-431000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2: exit status 83 (44.821625ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
** stderr ** 
	I0429 04:45:50.343698    7171 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:45:50.343829    7171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:50.343836    7171 out.go:304] Setting ErrFile to fd 2...
	I0429 04:45:50.343838    7171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:50.343972    7171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:45:50.344194    7171 mustload.go:65] Loading cluster: functional-431000
	I0429 04:45:50.344389    7171 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:45:50.348555    7171 out.go:177] * The control-plane node functional-431000 host is not running: state=Stopped
	I0429 04:45:50.352729    7171 out.go:177]   To start a cluster, run: "minikube start -p functional-431000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2: exit status 83 (43.531125ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
** stderr ** 
	I0429 04:45:50.299002    7169 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:45:50.299142    7169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:50.299145    7169 out.go:304] Setting ErrFile to fd 2...
	I0429 04:45:50.299147    7169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:50.299271    7169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:45:50.299484    7169 mustload.go:65] Loading cluster: functional-431000
	I0429 04:45:50.299701    7169 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:45:50.303756    7169 out.go:177] * The control-plane node functional-431000 host is not running: state=Stopped
	I0429 04:45:50.307769    7169 out.go:177]   To start a cluster, run: "minikube start -p functional-431000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)
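
All three update-context subtests run the identical command and differ only in the message they expect ("No changes" versus "context has been updated"); each received the stopped-host advisory instead. The command under test:

	out/minikube-darwin-arm64 -p functional-431000 update-context --alsologtostderr -v=2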

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-431000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-431000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.872625ms)

** stderr ** 
	error: context "functional-431000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-431000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 service list: exit status 83 (49.749167ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-431000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 service list -o json: exit status 83 (48.842667ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-431000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 service --namespace=default --https --url hello-node: exit status 83 (45.748833ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-431000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 service hello-node --url --format={{.IP}}: exit status 83 (48.829916ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-431000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 service hello-node --url: exit status 83 (43.915417ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-431000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test.go:1565: failed to parse "* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"": parse "* The control-plane node functional-431000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-431000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
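
Every ServiceCmd subtest above fails the same way: the stopped-host advisory lands on stdout and the tests then try to parse it as a service list, an IP, or a URL. For comparison, a passing run of the URL form prints a single parseable endpoint (the address below is illustrative, not from this run):

	out/minikube-darwin-arm64 -p functional-431000 service hello-node --url
	http://192.168.105.4:31234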

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0429 04:45:07.794762    6963 out.go:291] Setting OutFile to fd 1 ...
I0429 04:45:07.794940    6963 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:07.794947    6963 out.go:304] Setting ErrFile to fd 2...
I0429 04:45:07.794949    6963 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:45:07.795122    6963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:45:07.795373    6963 mustload.go:65] Loading cluster: functional-431000
I0429 04:45:07.795611    6963 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:45:07.801011    6963 out.go:177] * The control-plane node functional-431000 host is not running: state=Stopped
I0429 04:45:07.803961    6963 out.go:177]   To start a cluster, run: "minikube start -p functional-431000"

stdout: * The control-plane node functional-431000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-431000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 6962: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
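
This test starts two concurrent tunnel daemons and expects the second to coexist with the first; here both exit immediately with status 83, so the teardown finds no live processes (hence the "process does not exist" and "process already finished" messages and the closed-pipe reads). The daemon invocation, for reference:

	out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr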

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-431000": client config: context "functional-431000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (96.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-431000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-431000 get svc nginx-svc: exit status 1 (69.73ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-431000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-431000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (96.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image load --daemon gcr.io/google-containers/addon-resizer:functional-431000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-431000 image load --daemon gcr.io/google-containers/addon-resizer:functional-431000 --alsologtostderr: (1.283242917s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-431000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image load --daemon gcr.io/google-containers/addon-resizer:functional-431000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-431000 image load --daemon gcr.io/google-containers/addon-resizer:functional-431000 --alsologtostderr: (1.313729958s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-431000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.282371125s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-431000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image load --daemon gcr.io/google-containers/addon-resizer:functional-431000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-431000 image load --daemon gcr.io/google-containers/addon-resizer:functional-431000 --alsologtostderr: (1.16254675s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-431000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image save gcr.io/google-containers/addon-resizer:functional-431000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-431000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)
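
ImageSaveToFile and ImageLoadFromFile form a round trip: save an image to a tarball, then load it back and list it. Because the save step never produced a file, the load had nothing to import and the final listing is empty. The intended sequence, as a sketch:

	out/minikube-darwin-arm64 -p functional-431000 image save gcr.io/google-containers/addon-resizer:functional-431000 /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-arm64 -p functional-431000 image load /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-arm64 -p functional-431000 image ls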

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.03329475s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 12 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
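
The dig probe queries the in-cluster DNS service (10.96.0.10, visible as resolver #8 in the scutil dump above) for the nginx-svc A record, with the timeout and retry count given on the command line:

	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A

A passing run returns one A record, producing the "ANSWER: 1" header the assertion searches for; with no tunnel and no running cluster, the query times out after three 5-second tries.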

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.94s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.94s)

TestMultiControlPlane/serial/StartCluster (10.1s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-742000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-742000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.032245708s)

-- stdout --
	* [ha-742000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-742000" primary control-plane node in "ha-742000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-742000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:47:36.139786    7222 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:47:36.139942    7222 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:47:36.139945    7222 out.go:304] Setting ErrFile to fd 2...
	I0429 04:47:36.139947    7222 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:47:36.140073    7222 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:47:36.141172    7222 out.go:298] Setting JSON to false
	I0429 04:47:36.157368    7222 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4627,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:47:36.157422    7222 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:47:36.163631    7222 out.go:177] * [ha-742000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:47:36.167553    7222 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:47:36.171602    7222 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:47:36.167655    7222 notify.go:220] Checking for updates...
	I0429 04:47:36.178572    7222 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:47:36.181653    7222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:47:36.184564    7222 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:47:36.187610    7222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:47:36.190837    7222 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:47:36.194592    7222 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:47:36.201549    7222 start.go:297] selected driver: qemu2
	I0429 04:47:36.201554    7222 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:47:36.201563    7222 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:47:36.203814    7222 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:47:36.206589    7222 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:47:36.209705    7222 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:47:36.209739    7222 cni.go:84] Creating CNI manager for ""
	I0429 04:47:36.209745    7222 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 04:47:36.209754    7222 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 04:47:36.209787    7222 start.go:340] cluster config:
	{Name:ha-742000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-742000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:47:36.214528    7222 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:47:36.222568    7222 out.go:177] * Starting "ha-742000" primary control-plane node in "ha-742000" cluster
	I0429 04:47:36.226650    7222 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:47:36.226668    7222 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:47:36.226677    7222 cache.go:56] Caching tarball of preloaded images
	I0429 04:47:36.226746    7222 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:47:36.226752    7222 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:47:36.226955    7222 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/ha-742000/config.json ...
	I0429 04:47:36.226967    7222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/ha-742000/config.json: {Name:mkccb81da59f187cb10ac6d6b951a766f1f96c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:47:36.227167    7222 start.go:360] acquireMachinesLock for ha-742000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:47:36.227202    7222 start.go:364] duration metric: took 29µs to acquireMachinesLock for "ha-742000"
	I0429 04:47:36.227215    7222 start.go:93] Provisioning new machine with config: &{Name:ha-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-742000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:47:36.227247    7222 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:47:36.230630    7222 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 04:47:36.247371    7222 start.go:159] libmachine.API.Create for "ha-742000" (driver="qemu2")
	I0429 04:47:36.247398    7222 client.go:168] LocalClient.Create starting
	I0429 04:47:36.247483    7222 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:47:36.247517    7222 main.go:141] libmachine: Decoding PEM data...
	I0429 04:47:36.247527    7222 main.go:141] libmachine: Parsing certificate...
	I0429 04:47:36.247564    7222 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:47:36.247588    7222 main.go:141] libmachine: Decoding PEM data...
	I0429 04:47:36.247594    7222 main.go:141] libmachine: Parsing certificate...
	I0429 04:47:36.247943    7222 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:47:36.389074    7222 main.go:141] libmachine: Creating SSH key...
	I0429 04:47:36.632334    7222 main.go:141] libmachine: Creating Disk image...
	I0429 04:47:36.632344    7222 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:47:36.632546    7222 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:47:36.645331    7222 main.go:141] libmachine: STDOUT: 
	I0429 04:47:36.645354    7222 main.go:141] libmachine: STDERR: 
	I0429 04:47:36.645415    7222 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2 +20000M
	I0429 04:47:36.656385    7222 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:47:36.656406    7222 main.go:141] libmachine: STDERR: 
	I0429 04:47:36.656425    7222 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:47:36.656433    7222 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:47:36.656470    7222 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:fc:37:7d:32:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:47:36.658203    7222 main.go:141] libmachine: STDOUT: 
	I0429 04:47:36.658220    7222 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:47:36.658238    7222 client.go:171] duration metric: took 410.8385ms to LocalClient.Create
	I0429 04:47:38.660406    7222 start.go:128] duration metric: took 2.43315775s to createHost
	I0429 04:47:38.660486    7222 start.go:83] releasing machines lock for "ha-742000", held for 2.433293375s
	W0429 04:47:38.660566    7222 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:47:38.666815    7222 out.go:177] * Deleting "ha-742000" in qemu2 ...
	W0429 04:47:38.696126    7222 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:47:38.696152    7222 start.go:728] Will try again in 5 seconds ...
	I0429 04:47:43.698313    7222 start.go:360] acquireMachinesLock for ha-742000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:47:43.698725    7222 start.go:364] duration metric: took 339.833µs to acquireMachinesLock for "ha-742000"
	I0429 04:47:43.698840    7222 start.go:93] Provisioning new machine with config: &{Name:ha-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-742000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:47:43.699142    7222 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:47:43.707779    7222 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 04:47:43.757415    7222 start.go:159] libmachine.API.Create for "ha-742000" (driver="qemu2")
	I0429 04:47:43.757464    7222 client.go:168] LocalClient.Create starting
	I0429 04:47:43.757600    7222 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:47:43.757666    7222 main.go:141] libmachine: Decoding PEM data...
	I0429 04:47:43.757688    7222 main.go:141] libmachine: Parsing certificate...
	I0429 04:47:43.757752    7222 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:47:43.757793    7222 main.go:141] libmachine: Decoding PEM data...
	I0429 04:47:43.757806    7222 main.go:141] libmachine: Parsing certificate...
	I0429 04:47:43.758564    7222 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:47:43.911372    7222 main.go:141] libmachine: Creating SSH key...
	I0429 04:47:44.068404    7222 main.go:141] libmachine: Creating Disk image...
	I0429 04:47:44.068410    7222 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:47:44.068611    7222 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:47:44.081410    7222 main.go:141] libmachine: STDOUT: 
	I0429 04:47:44.081429    7222 main.go:141] libmachine: STDERR: 
	I0429 04:47:44.081490    7222 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2 +20000M
	I0429 04:47:44.092328    7222 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:47:44.092354    7222 main.go:141] libmachine: STDERR: 
	I0429 04:47:44.092369    7222 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:47:44.092374    7222 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:47:44.092410    7222 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:44:02:06:83:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:47:44.094189    7222 main.go:141] libmachine: STDOUT: 
	I0429 04:47:44.094205    7222 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:47:44.094217    7222 client.go:171] duration metric: took 336.749875ms to LocalClient.Create
	I0429 04:47:46.096377    7222 start.go:128] duration metric: took 2.397224958s to createHost
	I0429 04:47:46.096504    7222 start.go:83] releasing machines lock for "ha-742000", held for 2.397733833s
	W0429 04:47:46.096971    7222 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-742000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-742000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:47:46.109540    7222 out.go:177] 
	W0429 04:47:46.113622    7222 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:47:46.113645    7222 out.go:239] * 
	* 
	W0429 04:47:46.116236    7222 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:47:46.126448    7222 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-742000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (69.597917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.10s)
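
Both provisioning attempts in this test fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched, and the TestMultiControlPlane subtests below fail as a cascade of this one. A minimal Go spot-check (a standalone, hypothetical sketch, not part of the test suite; only the socket path is taken from the log) reproduces the first failing call:

	// socketcheck.go: dial the unix socket that socket_vmnet_client connects to.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the socket_vmnet daemon down this prints "connection refused"
			// (or "no such file or directory"), matching the failure above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this dial fails on the builder, the socket_vmnet daemon needs to be (re)started as root before rerunning the suite; how that service is managed on this machine is not visible in the log.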

TestMultiControlPlane/serial/DeployApp (68.74s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (63.02425ms)

** stderr ** 
	error: cluster "ha-742000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- rollout status deployment/busybox: exit status 1 (59.258958ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.923583ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.116959ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.956833ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.77975ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.0545ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.698708ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.960834ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.721583ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.992084ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.776ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.341417ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.63925ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.570792ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.736042ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.479667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (68.74s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-742000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.462416ms)

** stderr ** 
	error: no server found for cluster "ha-742000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.799208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-742000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-742000 -v=7 --alsologtostderr: exit status 83 (44.909917ms)

-- stdout --
	* The control-plane node ha-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-742000"

-- /stdout --
** stderr ** 
	I0429 04:48:55.079592    7302 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:48:55.080154    7302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.080158    7302 out.go:304] Setting ErrFile to fd 2...
	I0429 04:48:55.080160    7302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.080335    7302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:48:55.080571    7302 mustload.go:65] Loading cluster: ha-742000
	I0429 04:48:55.080765    7302 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:48:55.084371    7302 out.go:177] * The control-plane node ha-742000 host is not running: state=Stopped
	I0429 04:48:55.089112    7302 out.go:177]   To start a cluster, run: "minikube start -p ha-742000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-742000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.009958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-742000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-742000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.442292ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-742000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-742000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-742000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.189791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-742000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-742000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (31.912ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
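
Both assertions above decode the output of "out/minikube-darwin-arm64 profile list --output json" and inspect valid[].Config.Nodes. Because StartCluster never provisioned the VM, the profile still holds only its initial single control-plane node, so neither the expected count of 4 nor the "HAppy" status can be met. A trimmed sketch of that decode (the struct is a hypothetical subset; field names come from the JSON captured above):

	// profilecount.go: count nodes the way the check above does, on a
	// fragment trimmed from the logged "profile list" JSON.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					ControlPlane bool
					Worker       bool
				}
			}
		}
	}

	func main() {
		raw := []byte(`{"valid":[{"Name":"ha-742000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // prints "nodes: 1"
	}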

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status --output json -v=7 --alsologtostderr: exit status 7 (32.203333ms)

-- stdout --
	{"Name":"ha-742000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0429 04:48:55.321228    7315 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:48:55.321399    7315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.321403    7315 out.go:304] Setting ErrFile to fd 2...
	I0429 04:48:55.321405    7315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.321525    7315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:48:55.321640    7315 out.go:298] Setting JSON to true
	I0429 04:48:55.321650    7315 mustload.go:65] Loading cluster: ha-742000
	I0429 04:48:55.321719    7315 notify.go:220] Checking for updates...
	I0429 04:48:55.321863    7315 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:48:55.321869    7315 status.go:255] checking status of ha-742000 ...
	I0429 04:48:55.322079    7315 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:48:55.322084    7315 status.go:343] host is not running, skipping remaining checks
	I0429 04:48:55.322086    7315 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-742000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (31.887042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
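
The decode failure above ("json: cannot unmarshal object into Go value of type []cmd.Status") is the stock encoding/json mismatch between a single JSON object and a slice target: with only one node, the status command printed one object (see the stdout block above) while the test unmarshals into []cmd.Status. A minimal stdlib reproduction, using a stand-in struct:

	// statusdecode.go: Status stands in for the test's cmd.Status.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name string
		Host string
	}

	func main() {
		// A single JSON object, as printed for the one-node profile above.
		raw := []byte(`{"Name":"ha-742000","Host":"Stopped"}`)
		var statuses []Status
		err := json.Unmarshal(raw, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}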

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 node stop m02 -v=7 --alsologtostderr: exit status 85 (48.505542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0429 04:48:55.386080    7319 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:48:55.386328    7319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.386331    7319 out.go:304] Setting ErrFile to fd 2...
	I0429 04:48:55.386333    7319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.386450    7319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:48:55.386696    7319 mustload.go:65] Loading cluster: ha-742000
	I0429 04:48:55.386885    7319 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:48:55.390161    7319 out.go:177] 
	W0429 04:48:55.393051    7319 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0429 04:48:55.393055    7319 out.go:239] * 
	* 
	W0429 04:48:55.394870    7319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:48:55.398993    7319 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-742000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (32.29575ms)

-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0429 04:48:55.434552    7321 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:48:55.434738    7321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.434741    7321 out.go:304] Setting ErrFile to fd 2...
	I0429 04:48:55.434743    7321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.434897    7321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:48:55.435029    7321 out.go:298] Setting JSON to false
	I0429 04:48:55.435040    7321 mustload.go:65] Loading cluster: ha-742000
	I0429 04:48:55.435112    7321 notify.go:220] Checking for updates...
	I0429 04:48:55.435251    7321 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:48:55.435257    7321 status.go:255] checking status of ha-742000 ...
	I0429 04:48:55.435459    7321 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:48:55.435463    7321 status.go:343] host is not running, skipping remaining checks
	I0429 04:48:55.435465    7321 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr": ha-742000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr": ha-742000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr": ha-742000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr": ha-742000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.466667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-742000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.847167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
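
Note: ha_test.go:413 expects the profile to report "Degraded" (some but not all control planes running); with the lone node stopped, "profile list" reports "Stopped" instead. Rather than reading the raw JSON blob above, the relevant fields can be extracted with jq (assuming it is installed on the agent):

    out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name): \(.Status)"'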

TestMultiControlPlane/serial/RestartSecondaryNode (46.19s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.714667ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0429 04:48:55.603551    7331 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:48:55.603791    7331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.603794    7331 out.go:304] Setting ErrFile to fd 2...
	I0429 04:48:55.603796    7331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.603920    7331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:48:55.604148    7331 mustload.go:65] Loading cluster: ha-742000
	I0429 04:48:55.604339    7331 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:48:55.607758    7331 out.go:177] 
	W0429 04:48:55.610828    7331 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0429 04:48:55.610833    7331 out.go:239] * 
	* 
	W0429 04:48:55.612648    7331 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:48:55.616751    7331 out.go:177] 
** /stderr **
ha_test.go:422: I0429 04:48:55.603551    7331 out.go:291] Setting OutFile to fd 1 ...
I0429 04:48:55.603791    7331 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:48:55.603794    7331 out.go:304] Setting ErrFile to fd 2...
I0429 04:48:55.603796    7331 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:48:55.603920    7331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:48:55.604148    7331 mustload.go:65] Loading cluster: ha-742000
I0429 04:48:55.604339    7331 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:48:55.607758    7331 out.go:177] 
W0429 04:48:55.610828    7331 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0429 04:48:55.610833    7331 out.go:239] * 
* 
W0429 04:48:55.612648    7331 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0429 04:48:55.616751    7331 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-742000 node start m02 -v=7 --alsologtostderr": exit status 85
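
Note: exit status 85 (GUEST_NODE_RETRIEVE) is a downstream effect of the earlier StartCluster failure: the saved profile contains exactly one unnamed control-plane node (see the "Nodes" array in the config dumps above), so there is no m02 for "node start" to find. One way to confirm, again assuming jq:

    out/minikube-darwin-arm64 profile list --output json | jq '.valid[].Config.Nodes'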
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (32.703292ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:48:55.652782    7333 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:48:55.652949    7333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.652952    7333 out.go:304] Setting ErrFile to fd 2...
	I0429 04:48:55.652955    7333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:55.653086    7333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:48:55.653209    7333 out.go:298] Setting JSON to false
	I0429 04:48:55.653220    7333 mustload.go:65] Loading cluster: ha-742000
	I0429 04:48:55.653277    7333 notify.go:220] Checking for updates...
	I0429 04:48:55.653429    7333 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:48:55.653435    7333 status.go:255] checking status of ha-742000 ...
	I0429 04:48:55.653626    7333 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:48:55.653630    7333 status.go:343] host is not running, skipping remaining checks
	I0429 04:48:55.653632    7333 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (77.279166ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:48:56.650661    7335 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:48:56.650864    7335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:56.650869    7335 out.go:304] Setting ErrFile to fd 2...
	I0429 04:48:56.650872    7335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:56.651033    7335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:48:56.651184    7335 out.go:298] Setting JSON to false
	I0429 04:48:56.651197    7335 mustload.go:65] Loading cluster: ha-742000
	I0429 04:48:56.651241    7335 notify.go:220] Checking for updates...
	I0429 04:48:56.651421    7335 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:48:56.651427    7335 status.go:255] checking status of ha-742000 ...
	I0429 04:48:56.651689    7335 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:48:56.651694    7335 status.go:343] host is not running, skipping remaining checks
	I0429 04:48:56.651697    7335 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (75.895041ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:48:57.768678    7337 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:48:57.768865    7337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:57.768869    7337 out.go:304] Setting ErrFile to fd 2...
	I0429 04:48:57.768872    7337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:48:57.769045    7337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:48:57.769196    7337 out.go:298] Setting JSON to false
	I0429 04:48:57.769210    7337 mustload.go:65] Loading cluster: ha-742000
	I0429 04:48:57.769255    7337 notify.go:220] Checking for updates...
	I0429 04:48:57.769465    7337 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:48:57.769471    7337 status.go:255] checking status of ha-742000 ...
	I0429 04:48:57.769735    7337 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:48:57.769739    7337 status.go:343] host is not running, skipping remaining checks
	I0429 04:48:57.769742    7337 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (76.235125ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:49:00.268360    7339 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:00.268545    7339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:00.268549    7339 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:00.268552    7339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:00.268744    7339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:00.268911    7339 out.go:298] Setting JSON to false
	I0429 04:49:00.268932    7339 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:00.268957    7339 notify.go:220] Checking for updates...
	I0429 04:49:00.269176    7339 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:00.269183    7339 status.go:255] checking status of ha-742000 ...
	I0429 04:49:00.269445    7339 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:49:00.269450    7339 status.go:343] host is not running, skipping remaining checks
	I0429 04:49:00.269453    7339 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (73.499958ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:49:03.316762    7344 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:03.316994    7344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:03.316997    7344 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:03.317001    7344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:03.317187    7344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:03.317350    7344 out.go:298] Setting JSON to false
	I0429 04:49:03.317365    7344 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:03.317409    7344 notify.go:220] Checking for updates...
	I0429 04:49:03.317621    7344 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:03.317629    7344 status.go:255] checking status of ha-742000 ...
	I0429 04:49:03.317888    7344 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:49:03.317893    7344 status.go:343] host is not running, skipping remaining checks
	I0429 04:49:03.317896    7344 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (74.750875ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:49:09.814509    7346 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:09.814695    7346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:09.814699    7346 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:09.814709    7346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:09.814874    7346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:09.815089    7346 out.go:298] Setting JSON to false
	I0429 04:49:09.815103    7346 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:09.815141    7346 notify.go:220] Checking for updates...
	I0429 04:49:09.815356    7346 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:09.815364    7346 status.go:255] checking status of ha-742000 ...
	I0429 04:49:09.815643    7346 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:49:09.815648    7346 status.go:343] host is not running, skipping remaining checks
	I0429 04:49:09.815651    7346 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (76.916333ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:49:16.445848    7351 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:16.446029    7351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:16.446033    7351 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:16.446036    7351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:16.446210    7351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:16.446386    7351 out.go:298] Setting JSON to false
	I0429 04:49:16.446399    7351 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:16.446433    7351 notify.go:220] Checking for updates...
	I0429 04:49:16.446633    7351 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:16.446639    7351 status.go:255] checking status of ha-742000 ...
	I0429 04:49:16.446911    7351 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:49:16.446915    7351 status.go:343] host is not running, skipping remaining checks
	I0429 04:49:16.446918    7351 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (78.907291ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:49:32.966941    7353 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:32.967163    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:32.967168    7353 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:32.967176    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:32.967338    7353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:32.967483    7353 out.go:298] Setting JSON to false
	I0429 04:49:32.967497    7353 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:32.967539    7353 notify.go:220] Checking for updates...
	I0429 04:49:32.967749    7353 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:32.967756    7353 status.go:255] checking status of ha-742000 ...
	I0429 04:49:32.968022    7353 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:49:32.968026    7353 status.go:343] host is not running, skipping remaining checks
	I0429 04:49:32.968029    7353 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (76.356708ms)
-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0429 04:49:41.722166    7358 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:41.722362    7358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:41.722367    7358 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:41.722370    7358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:41.722522    7358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:41.722676    7358 out.go:298] Setting JSON to false
	I0429 04:49:41.722689    7358 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:41.722717    7358 notify.go:220] Checking for updates...
	I0429 04:49:41.722924    7358 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:41.722931    7358 status.go:255] checking status of ha-742000 ...
	I0429 04:49:41.723169    7358 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:49:41.723173    7358 status.go:343] host is not running, skipping remaining checks
	I0429 04:49:41.723176    7358 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (34.235625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (46.19s)
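
Note: the nine identical status probes above (04:48:55 through 04:49:41) are the test's retry loop waiting for the cluster to become healthy; since the VM never starts, every probe exits with status 7 until the retries are exhausted, which accounts for most of this subtest's 46.19s wall time. The same wait can be reproduced by hand with a plain shell loop, for example:

    until out/minikube-darwin-arm64 -p ha-742000 status >/dev/null 2>&1; do sleep 5; done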

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-742000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-742000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.1715ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)
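
Note: ha_test.go:304 and ha_test.go:307 restate the same problem as counts: 4 nodes expected versus the 1 node actually present in the profile, and a "HAppy" status versus "Stopped". Assuming jq, both fields can be pulled from the profile list in one query:

    out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {name: .Name, status: .Status, nodes: (.Config.Nodes | length)}'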

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-742000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-742000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-742000 -v=7 --alsologtostderr: (2.029784583s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-742000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-742000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.218701166s)
-- stdout --
	* [ha-742000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-742000" primary control-plane node in "ha-742000" cluster
	* Restarting existing qemu2 VM for "ha-742000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-742000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0429 04:49:43.993745    7382 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:43.993901    7382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:43.993905    7382 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:43.993908    7382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:43.994068    7382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:43.995194    7382 out.go:298] Setting JSON to false
	I0429 04:49:44.013969    7382 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4755,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:49:44.014040    7382 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:49:44.019229    7382 out.go:177] * [ha-742000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:49:44.027121    7382 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:49:44.027170    7382 notify.go:220] Checking for updates...
	I0429 04:49:44.031188    7382 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:49:44.034091    7382 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:49:44.037182    7382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:49:44.040177    7382 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:49:44.041512    7382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:49:44.044465    7382 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:44.044515    7382 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:49:44.049245    7382 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:49:44.056163    7382 start.go:297] selected driver: qemu2
	I0429 04:49:44.056171    7382 start.go:901] validating driver "qemu2" against &{Name:ha-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.0 ClusterName:ha-742000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:49:44.056221    7382 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:49:44.058438    7382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:49:44.058486    7382 cni.go:84] Creating CNI manager for ""
	I0429 04:49:44.058491    7382 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 04:49:44.058542    7382 start.go:340] cluster config:
	{Name:ha-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-742000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:49:44.062860    7382 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:49:44.070133    7382 out.go:177] * Starting "ha-742000" primary control-plane node in "ha-742000" cluster
	I0429 04:49:44.074185    7382 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:49:44.074201    7382 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:49:44.074209    7382 cache.go:56] Caching tarball of preloaded images
	I0429 04:49:44.074272    7382 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:49:44.074278    7382 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:49:44.074328    7382 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/ha-742000/config.json ...
	I0429 04:49:44.074801    7382 start.go:360] acquireMachinesLock for ha-742000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:49:44.074835    7382 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "ha-742000"
	I0429 04:49:44.074845    7382 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:49:44.074850    7382 fix.go:54] fixHost starting: 
	I0429 04:49:44.074958    7382 fix.go:112] recreateIfNeeded on ha-742000: state=Stopped err=<nil>
	W0429 04:49:44.074966    7382 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:49:44.083129    7382 out.go:177] * Restarting existing qemu2 VM for "ha-742000" ...
	I0429 04:49:44.087195    7382 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:44:02:06:83:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:49:44.089311    7382 main.go:141] libmachine: STDOUT: 
	I0429 04:49:44.089336    7382 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:49:44.089365    7382 fix.go:56] duration metric: took 14.514375ms for fixHost
	I0429 04:49:44.089369    7382 start.go:83] releasing machines lock for "ha-742000", held for 14.529666ms
	W0429 04:49:44.089377    7382 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:49:44.089409    7382 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:49:44.089414    7382 start.go:728] Will try again in 5 seconds ...
	I0429 04:49:49.091656    7382 start.go:360] acquireMachinesLock for ha-742000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:49:49.092122    7382 start.go:364] duration metric: took 370µs to acquireMachinesLock for "ha-742000"
	I0429 04:49:49.092261    7382 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:49:49.092284    7382 fix.go:54] fixHost starting: 
	I0429 04:49:49.093054    7382 fix.go:112] recreateIfNeeded on ha-742000: state=Stopped err=<nil>
	W0429 04:49:49.093079    7382 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:49:49.097728    7382 out.go:177] * Restarting existing qemu2 VM for "ha-742000" ...
	I0429 04:49:49.100979    7382 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:44:02:06:83:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:49:49.110843    7382 main.go:141] libmachine: STDOUT: 
	I0429 04:49:49.110902    7382 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:49:49.110960    7382 fix.go:56] duration metric: took 18.680166ms for fixHost
	I0429 04:49:49.110976    7382 start.go:83] releasing machines lock for "ha-742000", held for 18.83475ms
	W0429 04:49:49.111137    7382 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-742000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-742000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:49:49.118548    7382 out.go:177] 
	W0429 04:49:49.121503    7382 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:49:49.121544    7382 out.go:239] * 
	* 
	W0429 04:49:49.123914    7382 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:49:49.132502    7382 out.go:177] 
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-742000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-742000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (34.356167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.39s)
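
Note: this subtest exposes the root cause behind the whole TestMultiControlPlane group: the qemu2 driver launches the VM through socket_vmnet_client (full command line in the stderr block above), which connects to the vmnet daemon's unix socket and hands the connected descriptor to qemu as "-netdev socket,id=net0,fd=3". Both restart attempts failed with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning no daemon was listening on the agent. A quick check, assuming BSD nc is available on the host, is to probe the socket directly:

    ls -l /var/run/socket_vmnet
    nc -U /var/run/socket_vmnet </dev/null && echo "socket_vmnet is accepting connections"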

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.331042ms)
-- stdout --
	* The control-plane node ha-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-742000"
-- /stdout --
** stderr ** 
	I0429 04:49:49.283972    7394 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:49.284395    7394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:49.284399    7394 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:49.284402    7394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:49.284542    7394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:49.284754    7394 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:49.284923    7394 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:49.288596    7394 out.go:177] * The control-plane node ha-742000 host is not running: state=Stopped
	I0429 04:49:49.291325    7394 out.go:177]   To start a cluster, run: "minikube start -p ha-742000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-742000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (30.823625ms)

-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0429 04:49:49.324354    7396 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:49.324480    7396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:49.324486    7396 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:49.324488    7396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:49.324607    7396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:49.324731    7396 out.go:298] Setting JSON to false
	I0429 04:49:49.324742    7396 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:49.324796    7396 notify.go:220] Checking for updates...
	I0429 04:49:49.324941    7396 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:49.324947    7396 status.go:255] checking status of ha-742000 ...
	I0429 04:49:49.325179    7396 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:49:49.325182    7396 status.go:343] host is not running, skipping remaining checks
	I0429 04:49:49.325185    7396 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (30.760083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-742000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (31.835625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
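The assertion at ha_test.go:413 decodes the `profile list --output json` payload quoted above and inspects the top-level "Status" field of each valid profile; since the VM never came back up, the profile reports "Stopped" where the test expects "Degraded". A minimal Go sketch of that check, assuming only the two fields it reads (the profileList type below is illustrative, not minikube's own):

	// Hedged sketch of the status check behind ha_test.go:413; the type
	// covers only the fields the check reads.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Payload trimmed from the `profile list --output json` output above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-742000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-742000" && p.Status != "Degraded" {
				fmt.Printf("expected profile %q to have Degraded status but have %q status\n", p.Name, p.Status)
			}
		}
	}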

TestMultiControlPlane/serial/StopCluster (3.97s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-742000 stop -v=7 --alsologtostderr: (3.869279792s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr: exit status 7 (68.87275ms)

-- stdout --
	ha-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0429 04:49:53.397377    7426 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:53.397586    7426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:53.397590    7426 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:53.397593    7426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:53.397751    7426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:53.397904    7426 out.go:298] Setting JSON to false
	I0429 04:49:53.397917    7426 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:53.397949    7426 notify.go:220] Checking for updates...
	I0429 04:49:53.398182    7426 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:53.398188    7426 status.go:255] checking status of ha-742000 ...
	I0429 04:49:53.398441    7426 status.go:330] ha-742000 host status = "Stopped" (err=<nil>)
	I0429 04:49:53.398445    7426 status.go:343] host is not running, skipping remaining checks
	I0429 04:49:53.398448    7426 status.go:257] ha-742000 status: &{Name:ha-742000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr": ha-742000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr": ha-742000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-742000 status -v=7 --alsologtostderr": ha-742000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (34.065541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.97s)

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-742000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-742000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.193160083s)

-- stdout --
	* [ha-742000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-742000" primary control-plane node in "ha-742000" cluster
	* Restarting existing qemu2 VM for "ha-742000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-742000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:49:53.463782    7430 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:53.463906    7430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:53.463909    7430 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:53.463911    7430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:53.464047    7430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:53.465160    7430 out.go:298] Setting JSON to false
	I0429 04:49:53.481155    7430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4764,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:49:53.481223    7430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:49:53.486670    7430 out.go:177] * [ha-742000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:49:53.495527    7430 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:49:53.495606    7430 notify.go:220] Checking for updates...
	I0429 04:49:53.499572    7430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:49:53.502479    7430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:49:53.505484    7430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:49:53.512427    7430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:49:53.516495    7430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:49:53.519810    7430 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:53.520068    7430 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:49:53.524472    7430 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:49:53.531489    7430 start.go:297] selected driver: qemu2
	I0429 04:49:53.531495    7430 start.go:901] validating driver "qemu2" against &{Name:ha-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.0 ClusterName:ha-742000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:49:53.531551    7430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:49:53.533841    7430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:49:53.533929    7430 cni.go:84] Creating CNI manager for ""
	I0429 04:49:53.533935    7430 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 04:49:53.533996    7430 start.go:340] cluster config:
	{Name:ha-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-742000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:49:53.538368    7430 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:49:53.545539    7430 out.go:177] * Starting "ha-742000" primary control-plane node in "ha-742000" cluster
	I0429 04:49:53.548480    7430 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:49:53.548493    7430 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:49:53.548498    7430 cache.go:56] Caching tarball of preloaded images
	I0429 04:49:53.548548    7430 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:49:53.548553    7430 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:49:53.548605    7430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/ha-742000/config.json ...
	I0429 04:49:53.549076    7430 start.go:360] acquireMachinesLock for ha-742000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:49:53.549115    7430 start.go:364] duration metric: took 32.083µs to acquireMachinesLock for "ha-742000"
	I0429 04:49:53.549125    7430 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:49:53.549130    7430 fix.go:54] fixHost starting: 
	I0429 04:49:53.549248    7430 fix.go:112] recreateIfNeeded on ha-742000: state=Stopped err=<nil>
	W0429 04:49:53.549256    7430 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:49:53.557338    7430 out.go:177] * Restarting existing qemu2 VM for "ha-742000" ...
	I0429 04:49:53.561512    7430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:44:02:06:83:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:49:53.563553    7430 main.go:141] libmachine: STDOUT: 
	I0429 04:49:53.563576    7430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:49:53.563605    7430 fix.go:56] duration metric: took 14.474834ms for fixHost
	I0429 04:49:53.563609    7430 start.go:83] releasing machines lock for "ha-742000", held for 14.490583ms
	W0429 04:49:53.563618    7430 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:49:53.563652    7430 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:49:53.563657    7430 start.go:728] Will try again in 5 seconds ...
	I0429 04:49:58.565830    7430 start.go:360] acquireMachinesLock for ha-742000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:49:58.566220    7430 start.go:364] duration metric: took 302.75µs to acquireMachinesLock for "ha-742000"
	I0429 04:49:58.566344    7430 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:49:58.566365    7430 fix.go:54] fixHost starting: 
	I0429 04:49:58.567141    7430 fix.go:112] recreateIfNeeded on ha-742000: state=Stopped err=<nil>
	W0429 04:49:58.567167    7430 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:49:58.574499    7430 out.go:177] * Restarting existing qemu2 VM for "ha-742000" ...
	I0429 04:49:58.578718    7430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:44:02:06:83:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/ha-742000/disk.qcow2
	I0429 04:49:58.587443    7430 main.go:141] libmachine: STDOUT: 
	I0429 04:49:58.587505    7430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:49:58.587570    7430 fix.go:56] duration metric: took 21.205709ms for fixHost
	I0429 04:49:58.587587    7430 start.go:83] releasing machines lock for "ha-742000", held for 21.347583ms
	W0429 04:49:58.587777    7430 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-742000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:49:58.595550    7430 out.go:177] 
	W0429 04:49:58.599606    7430 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:49:58.599688    7430 out.go:239] * 
	W0429 04:49:58.602336    7430 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:49:58.612422    7430 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-742000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (70.365375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
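Every restart in this run dies at the same call: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon's unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. The failure sits in the host's socket_vmnet service rather than in the cluster under test. A hedged pre-flight sketch one could run on the CI host before the suite (the socket path is taken from the log; the check itself is illustrative and may need root to access /var/run):

	// Hedged pre-flight check: does the socket_vmnet control socket exist,
	// and is anything listening on it? Dialing with no listener reproduces
	// the "Connection refused" seen throughout this run.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		fi, err := os.Stat(sock)
		if err != nil {
			fmt.Println("socket missing; socket_vmnet daemon likely not running:", err)
			return
		}
		if fi.Mode()&os.ModeSocket == 0 {
			fmt.Printf("%s exists but is not a unix socket (mode %v)\n", sock, fi.Mode())
			return
		}
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. connect: connection refused
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails, restarting the daemon on the host (via launchd, or `sudo brew services restart socket_vmnet` for a Homebrew install) is the usual fix; the "minikube delete -p ha-742000" advice in the log cannot help here, since the error precedes the guest entirely.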

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-742000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (31.901334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-742000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-742000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.818958ms)

-- stdout --
	* The control-plane node ha-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-742000"

-- /stdout --
** stderr ** 
	I0429 04:49:58.835435    7446 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:49:58.835588    7446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:58.835591    7446 out.go:304] Setting ErrFile to fd 2...
	I0429 04:49:58.835593    7446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:49:58.835732    7446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:49:58.835993    7446 mustload.go:65] Loading cluster: ha-742000
	I0429 04:49:58.836187    7446 config.go:182] Loaded profile config "ha-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:49:58.840871    7446 out.go:177] * The control-plane node ha-742000 host is not running: state=Stopped
	I0429 04:49:58.844798    7446 out.go:177]   To start a cluster, run: "minikube start -p ha-742000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-742000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.972291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-742000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-742000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-742000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-742000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-742000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-742000 -n ha-742000: exit status 7 (32.412ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-742000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

TestImageBuild/serial/Setup (9.85s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-678000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-678000 --driver=qemu2 : exit status 80 (9.778112584s)

-- stdout --
	* [image-678000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-678000" primary control-plane node in "image-678000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-678000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-678000 -n image-678000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-678000 -n image-678000: exit status 7 (70.2785ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-678000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.85s)

TestJSONOutput/start/Command (9.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-838000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-838000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.803082833s)

-- stdout --
	{"specversion":"1.0","id":"07aa950b-6fad-48de-971c-3d4507b322e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-838000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9e294f4-6aea-45ba-b480-73bbe3046798","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18771"}}
	{"specversion":"1.0","id":"7b037591-1e1d-4c92-91ea-b4aa7beb29e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig"}}
	{"specversion":"1.0","id":"e10a89a4-8fef-4177-8767-4a57f2dd747d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"00007cb7-f3bc-4d9d-8ed4-f294cb155495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0957dc45-db6f-4355-85fa-873cadcb21c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube"}}
	{"specversion":"1.0","id":"de1f9219-583b-4aa7-82ef-a95f602a671f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"520f5a52-4693-4be6-842b-0b4510ae8cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"877675e7-d3ea-41d2-888b-e38c05a1dfd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4db517be-1451-4e8e-9385-55aa32308e8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-838000\" primary control-plane node in \"json-output-838000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b278c58-7d1b-4f1a-a14e-9ea1ee6a1f9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"8d4c4450-570f-4552-bfd1-abfbaeaaa436","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-838000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"b059e21f-54dd-4cbc-9fbc-3b244609398f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0d8eb76d-b1e8-4615-a32b-566d1c995eea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"b6416ae0-7c05-46b0-a158-f79d8f3046d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-838000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b3e908b3-e87f-4d8e-b269-69553990dae0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d476c71b-a0ea-4569-86de-d92268c39482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-838000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.80s)
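
The `invalid character 'O'` failure above is mechanical: with `--output=json` every stdout line is expected to be a CloudEvents JSON object, but the qemu2 driver leaks plain text (`OUTPUT: `, `ERROR: ...`) into the stream, and JSON decoding aborts on the first non-JSON byte. A minimal Go sketch of that failure mode (illustrative only, not the test's actual decoder):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating VM"}}`,
			`OUTPUT: `, // plain text leaked into the JSON stream by the driver
		}
		for _, line := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				fmt.Println(err) // invalid character 'O' looking for beginning of value
			}
		}
	}

The same mechanism explains the unpause failure below, where the leaked line starts with `*` and decoding stops at `invalid character '*'`.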

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-838000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-838000 --output=json --user=testUser: exit status 83 (81.366125ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1b87a19a-b1cf-4d9a-a01f-34a403b31bfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-838000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"c687677b-55f5-4764-a6ec-6ca67012b240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-838000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-838000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-838000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-838000 --output=json --user=testUser: exit status 83 (49.079458ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-838000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-838000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-838000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-838000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.35s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-153000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-153000 --driver=qemu2 : exit status 80 (9.908338792s)

                                                
                                                
-- stdout --
	* [first-153000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-153000" primary control-plane node in "first-153000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-153000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-153000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-29 04:50:32.8169 -0700 PDT m=+405.447110126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-154000 -n second-154000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-154000 -n second-154000: exit status 85 (80.689875ms)

                                                
                                                
-- stdout --
	* Profile "second-154000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-154000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-154000" host is not running, skipping log retrieval (state="* Profile \"second-154000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-154000\"")
helpers_test.go:175: Cleaning up "second-154000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-154000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-29 04:50:33.125063 -0700 PDT m=+405.755275710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-153000 -n first-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-153000 -n first-153000: exit status 7 (32.619583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-153000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-153000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-153000
--- FAIL: TestMinikubeProfile (10.35s)
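
Every start in this report dies on `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing is listening on the socket_vmnet unix socket on the build host. A quick probe of that socket, sketched under the assumption that the path from the logs is the one in use:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket path the qemu2 driver passes to socket_vmnet_client in the logs.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// A dead daemon surfaces as "connect: connection refused",
			// matching the driver error in every failed start here.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}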

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.93s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-220000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-220000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.860399s)

                                                
                                                
-- stdout --
	* [mount-start-1-220000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-220000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-220000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-220000 -n mount-start-1-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-220000 -n mount-start-1-220000: exit status 7 (70.809ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.93s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-368000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-368000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.902094417s)

                                                
                                                
-- stdout --
	* [multinode-368000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-368000" primary control-plane node in "multinode-368000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-368000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:50:43.544979    7612 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:50:43.545113    7612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:50:43.545117    7612 out.go:304] Setting ErrFile to fd 2...
	I0429 04:50:43.545119    7612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:50:43.545243    7612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:50:43.546317    7612 out.go:298] Setting JSON to false
	I0429 04:50:43.562276    7612 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4814,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:50:43.562351    7612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:50:43.568817    7612 out.go:177] * [multinode-368000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:50:43.576786    7612 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:50:43.576864    7612 notify.go:220] Checking for updates...
	I0429 04:50:43.580831    7612 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:50:43.582309    7612 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:50:43.585788    7612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:50:43.588833    7612 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:50:43.591837    7612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:50:43.594948    7612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:50:43.598727    7612 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:50:43.605784    7612 start.go:297] selected driver: qemu2
	I0429 04:50:43.605791    7612 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:50:43.605797    7612 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:50:43.608118    7612 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:50:43.611787    7612 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:50:43.614867    7612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:50:43.614896    7612 cni.go:84] Creating CNI manager for ""
	I0429 04:50:43.614901    7612 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 04:50:43.614905    7612 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 04:50:43.614939    7612 start.go:340] cluster config:
	{Name:multinode-368000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:50:43.619386    7612 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:50:43.626780    7612 out.go:177] * Starting "multinode-368000" primary control-plane node in "multinode-368000" cluster
	I0429 04:50:43.629720    7612 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:50:43.629745    7612 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:50:43.629754    7612 cache.go:56] Caching tarball of preloaded images
	I0429 04:50:43.629823    7612 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:50:43.629828    7612 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:50:43.630028    7612 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/multinode-368000/config.json ...
	I0429 04:50:43.630050    7612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/multinode-368000/config.json: {Name:mk38be94f929db335786cfcafd10f5e873b624fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:50:43.630271    7612 start.go:360] acquireMachinesLock for multinode-368000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:50:43.630305    7612 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "multinode-368000"
	I0429 04:50:43.630316    7612 start.go:93] Provisioning new machine with config: &{Name:multinode-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:50:43.630345    7612 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:50:43.637794    7612 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 04:50:43.654589    7612 start.go:159] libmachine.API.Create for "multinode-368000" (driver="qemu2")
	I0429 04:50:43.654624    7612 client.go:168] LocalClient.Create starting
	I0429 04:50:43.654689    7612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:50:43.654717    7612 main.go:141] libmachine: Decoding PEM data...
	I0429 04:50:43.654726    7612 main.go:141] libmachine: Parsing certificate...
	I0429 04:50:43.654765    7612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:50:43.654787    7612 main.go:141] libmachine: Decoding PEM data...
	I0429 04:50:43.654792    7612 main.go:141] libmachine: Parsing certificate...
	I0429 04:50:43.655205    7612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:50:43.797347    7612 main.go:141] libmachine: Creating SSH key...
	I0429 04:50:43.892731    7612 main.go:141] libmachine: Creating Disk image...
	I0429 04:50:43.892736    7612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:50:43.892907    7612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:50:43.905439    7612 main.go:141] libmachine: STDOUT: 
	I0429 04:50:43.905463    7612 main.go:141] libmachine: STDERR: 
	I0429 04:50:43.905525    7612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2 +20000M
	I0429 04:50:43.916638    7612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:50:43.916655    7612 main.go:141] libmachine: STDERR: 
	I0429 04:50:43.916671    7612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:50:43.916676    7612 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:50:43.916719    7612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:ae:d9:79:5b:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:50:43.918415    7612 main.go:141] libmachine: STDOUT: 
	I0429 04:50:43.918429    7612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:50:43.918452    7612 client.go:171] duration metric: took 263.82125ms to LocalClient.Create
	I0429 04:50:45.920617    7612 start.go:128] duration metric: took 2.29027025s to createHost
	I0429 04:50:45.920680    7612 start.go:83] releasing machines lock for "multinode-368000", held for 2.290386084s
	W0429 04:50:45.920731    7612 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:50:45.930995    7612 out.go:177] * Deleting "multinode-368000" in qemu2 ...
	W0429 04:50:45.959914    7612 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:50:45.959950    7612 start.go:728] Will try again in 5 seconds ...
	I0429 04:50:50.962149    7612 start.go:360] acquireMachinesLock for multinode-368000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:50:50.962615    7612 start.go:364] duration metric: took 360.084µs to acquireMachinesLock for "multinode-368000"
	I0429 04:50:50.962756    7612 start.go:93] Provisioning new machine with config: &{Name:multinode-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:50:50.963070    7612 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:50:50.971676    7612 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 04:50:51.019556    7612 start.go:159] libmachine.API.Create for "multinode-368000" (driver="qemu2")
	I0429 04:50:51.019615    7612 client.go:168] LocalClient.Create starting
	I0429 04:50:51.019727    7612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:50:51.019798    7612 main.go:141] libmachine: Decoding PEM data...
	I0429 04:50:51.019814    7612 main.go:141] libmachine: Parsing certificate...
	I0429 04:50:51.019879    7612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:50:51.019921    7612 main.go:141] libmachine: Decoding PEM data...
	I0429 04:50:51.019935    7612 main.go:141] libmachine: Parsing certificate...
	I0429 04:50:51.020470    7612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:50:51.172253    7612 main.go:141] libmachine: Creating SSH key...
	I0429 04:50:51.345546    7612 main.go:141] libmachine: Creating Disk image...
	I0429 04:50:51.345555    7612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:50:51.345743    7612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:50:51.358653    7612 main.go:141] libmachine: STDOUT: 
	I0429 04:50:51.358673    7612 main.go:141] libmachine: STDERR: 
	I0429 04:50:51.358724    7612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2 +20000M
	I0429 04:50:51.369621    7612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:50:51.369647    7612 main.go:141] libmachine: STDERR: 
	I0429 04:50:51.369660    7612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:50:51.369670    7612 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:50:51.369711    7612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:8e:d2:8d:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:50:51.371395    7612 main.go:141] libmachine: STDOUT: 
	I0429 04:50:51.371411    7612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:50:51.371433    7612 client.go:171] duration metric: took 351.816667ms to LocalClient.Create
	I0429 04:50:53.373591    7612 start.go:128] duration metric: took 2.410501875s to createHost
	I0429 04:50:53.373667    7612 start.go:83] releasing machines lock for "multinode-368000", held for 2.411029541s
	W0429 04:50:53.374073    7612 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-368000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-368000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:50:53.384719    7612 out.go:177] 
	W0429 04:50:53.389717    7612 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:50:53.389743    7612 out.go:239] * 
	* 
	W0429 04:50:53.392610    7612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:50:53.401693    7612 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-368000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (70.5775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.98s)
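
The verbose stderr above shows exactly how the error surfaces: libmachine execs `/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ...`, the client cannot reach the daemon, and its stderr plus `exit status 1` bubble up as the `creating host: create: creating` error. A reduced sketch of that exec-and-capture step (illustrative, not minikube's code; the `true` stand-in replaces the full qemu argument list):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Reduced form of the invocation logged above; "true" is a
		// hypothetical stand-in for the qemu-system-aarch64 command line.
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", "true")
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// With no daemon listening, stderr carries the "Connection refused"
			// message and err is "exit status 1", as in the log.
			fmt.Printf("err=%v, stderr=%q\n", err, stderr.String())
		}
	}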

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (106.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.806917ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-368000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- rollout status deployment/busybox: exit status 1 (58.102583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.584291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.845375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.957084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.75025ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.018625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.740542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.301666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.772958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.711792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.616958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.26975ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.932917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.13425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.51375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.670125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.126833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (106.32s)
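
The eleven identical `failed to retrieve Pod IPs (may be temporary)` lines come from a poll loop: the test reruns the same jsonpath query until it returns IPs or its time budget is exhausted, and with no cluster behind the profile every attempt fails the same way for the full ~106s. A generic sketch of that poll pattern (the interval and deadline are illustrative assumptions, not the test's real values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative budget
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "multinode-368000",
				"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("pod IPs: %s\n", out)
				return
			}
			fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
			time.Sleep(10 * time.Second) // illustrative interval
		}
		fmt.Println("failed to resolve pod IPs: timed out")
	}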

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-368000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.981375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.393083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-368000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-368000 -v 3 --alsologtostderr: exit status 83 (43.070834ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-368000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-368000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:39.931480    7704 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:39.931627    7704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:39.931631    7704 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:39.931633    7704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:39.931752    7704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:39.931977    7704 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:39.932154    7704 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:39.935310    7704 out.go:177] * The control-plane node multinode-368000 host is not running: state=Stopped
	I0429 04:52:39.939292    7704 out.go:177]   To start a cluster, run: "minikube start -p multinode-368000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-368000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.876167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-368000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-368000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.719875ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-368000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-368000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-368000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.270708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
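Note: kubectl exits 1 before printing anything because the multinode-368000 context was never written to the kubeconfig (the cluster never started), so the decode at multinode_test.go:230 runs on empty input. "unexpected end of JSON input" is exactly what encoding/json reports for zero bytes, as this self-contained sketch shows:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl produced no stdout, so the test decodes empty input.
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}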

                                                
                                    
TestMultiNode/serial/ProfileList (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-368000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-368000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-368000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-368000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.304708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
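Note: the assertion walks Config.Nodes inside the profile JSON above and expects 3 entries, but only the primary control-plane node is present because node add failed earlier. A hedged sketch of that count, decoding just the fields the check needs (the struct is illustrative, not minikube's types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					ControlPlane bool
					Worker       bool
				}
			}
		} `json:"valid"`
	}

	func main() {
		// Trimmed from the 'profile list --output json' blob above.
		raw := []byte(`{"valid":[{"Name":"multinode-368000",` +
			`"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // 1, not the expected 3
	}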

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status --output json --alsologtostderr: exit status 7 (32.430333ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-368000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:40.176270    7717 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:40.176403    7717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:40.176406    7717 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:40.176409    7717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:40.176537    7717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:40.176661    7717 out.go:298] Setting JSON to true
	I0429 04:52:40.176675    7717 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:40.176731    7717 notify.go:220] Checking for updates...
	I0429 04:52:40.176852    7717 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:40.176858    7717 status.go:255] checking status of multinode-368000 ...
	I0429 04:52:40.177061    7717 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:52:40.177064    7717 status.go:343] host is not running, skipping remaining checks
	I0429 04:52:40.177066    7717 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-368000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.069542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
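Note: the unmarshal error at multinode_test.go:191 is a shape mismatch, not corrupt output: with a single stopped node, status --output json prints one bare object (see the stdout above), while the test decodes into a slice ([]cmd.Status). A tolerant decoder that accepts either shape might look like this sketch (the status struct is a stand-in, not minikube's cmd.Status):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatuses accepts a JSON array (multinode) or a bare object
	// (single node), which is the case the log above trips over.
	func decodeStatuses(raw []byte) ([]status, error) {
		var many []status
		if err := json.Unmarshal(raw, &many); err == nil {
			return many, nil
		}
		var one status
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"multinode-368000","Host":"Stopped","Kubelet":"Stopped",` +
			`"APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		fmt.Println(decodeStatuses(raw))
	}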

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 node stop m03: exit status 85 (49.1485ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-368000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status: exit status 7 (32.045375ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr: exit status 7 (32.147958ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:40.322389    7725 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:40.322552    7725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:40.322555    7725 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:40.322557    7725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:40.322691    7725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:40.322812    7725 out.go:298] Setting JSON to false
	I0429 04:52:40.322824    7725 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:40.322893    7725 notify.go:220] Checking for updates...
	I0429 04:52:40.323017    7725 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:40.323022    7725 status.go:255] checking status of multinode-368000 ...
	I0429 04:52:40.323229    7725 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:52:40.323233    7725 status.go:343] host is not running, skipping remaining checks
	I0429 04:52:40.323235    7725 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr": multinode-368000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.164083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
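Note: exit status 85 (GUEST_NODE_RETRIEVE) falls out of the earlier AddNode failure: the m03 machine was never created, so there is nothing to stop. The m03 name follows minikube's multinode naming, where, as best this report's evidence shows, the primary keeps the profile name and extra machines get -m02, -m03 suffixes; an illustrative reconstruction, not minikube's actual implementation:

	package main

	import "fmt"

	// machineName reconstructs the naming scheme implied by "m03" above.
	func machineName(profile string, idx int) string {
		if idx == 1 {
			return profile // the primary control plane keeps the profile name
		}
		return fmt.Sprintf("%s-m%02d", profile, idx)
	}

	func main() {
		fmt.Println(machineName("multinode-368000", 3)) // multinode-368000-m03
	}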

                                                
                                    
TestMultiNode/serial/StartAfterStop (56.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 node start m03 -v=7 --alsologtostderr: exit status 85 (50.147583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:40.387052    7729 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:40.387445    7729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:40.387449    7729 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:40.387451    7729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:40.387617    7729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:40.387841    7729 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:40.388041    7729 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:40.391797    7729 out.go:177] 
	W0429 04:52:40.395828    7729 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0429 04:52:40.395832    7729 out.go:239] * 
	* 
	W0429 04:52:40.397638    7729 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:52:40.401660    7729 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0429 04:52:40.387052    7729 out.go:291] Setting OutFile to fd 1 ...
I0429 04:52:40.387445    7729 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:52:40.387449    7729 out.go:304] Setting ErrFile to fd 2...
I0429 04:52:40.387451    7729 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:52:40.387617    7729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
I0429 04:52:40.387841    7729 mustload.go:65] Loading cluster: multinode-368000
I0429 04:52:40.388041    7729 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:52:40.391797    7729 out.go:177] 
W0429 04:52:40.395828    7729 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0429 04:52:40.395832    7729 out.go:239] * 
* 
W0429 04:52:40.397638    7729 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0429 04:52:40.401660    7729 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-368000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (32.505333ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:40.437671    7731 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:40.437813    7731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:40.437816    7731 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:40.437818    7731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:40.437943    7731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:40.438060    7731 out.go:298] Setting JSON to false
	I0429 04:52:40.438071    7731 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:40.438130    7731 notify.go:220] Checking for updates...
	I0429 04:52:40.438283    7731 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:40.438289    7731 status.go:255] checking status of multinode-368000 ...
	I0429 04:52:40.438493    7731 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:52:40.438497    7731 status.go:343] host is not running, skipping remaining checks
	I0429 04:52:40.438499    7731 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (75.951875ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:41.623589    7733 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:41.623781    7733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:41.623786    7733 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:41.623788    7733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:41.623968    7733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:41.624124    7733 out.go:298] Setting JSON to false
	I0429 04:52:41.624138    7733 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:41.624185    7733 notify.go:220] Checking for updates...
	I0429 04:52:41.624382    7733 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:41.624389    7733 status.go:255] checking status of multinode-368000 ...
	I0429 04:52:41.624657    7733 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:52:41.624661    7733 status.go:343] host is not running, skipping remaining checks
	I0429 04:52:41.624664    7733 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (76.048708ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:43.310046    7735 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:43.310238    7735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:43.310243    7735 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:43.310246    7735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:43.310397    7735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:43.310549    7735 out.go:298] Setting JSON to false
	I0429 04:52:43.310563    7735 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:43.310605    7735 notify.go:220] Checking for updates...
	I0429 04:52:43.310824    7735 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:43.310831    7735 status.go:255] checking status of multinode-368000 ...
	I0429 04:52:43.311109    7735 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:52:43.311113    7735 status.go:343] host is not running, skipping remaining checks
	I0429 04:52:43.311116    7735 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (76.328292ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:46.715646    7737 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:46.715826    7737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:46.715830    7737 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:46.715833    7737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:46.716000    7737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:46.716139    7737 out.go:298] Setting JSON to false
	I0429 04:52:46.716152    7737 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:46.716184    7737 notify.go:220] Checking for updates...
	I0429 04:52:46.716409    7737 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:46.716415    7737 status.go:255] checking status of multinode-368000 ...
	I0429 04:52:46.716659    7737 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:52:46.716664    7737 status.go:343] host is not running, skipping remaining checks
	I0429 04:52:46.716666    7737 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (74.902333ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:51.352601    7742 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:51.352845    7742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:51.352849    7742 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:51.352852    7742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:51.353032    7742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:51.353183    7742 out.go:298] Setting JSON to false
	I0429 04:52:51.353197    7742 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:51.353236    7742 notify.go:220] Checking for updates...
	I0429 04:52:51.353460    7742 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:51.353468    7742 status.go:255] checking status of multinode-368000 ...
	I0429 04:52:51.353752    7742 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:52:51.353757    7742 status.go:343] host is not running, skipping remaining checks
	I0429 04:52:51.353760    7742 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (76.978583ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:52:55.619386    7744 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:52:55.619565    7744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:55.619569    7744 out.go:304] Setting ErrFile to fd 2...
	I0429 04:52:55.619572    7744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:52:55.619738    7744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:52:55.619901    7744 out.go:298] Setting JSON to false
	I0429 04:52:55.619915    7744 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:52:55.619953    7744 notify.go:220] Checking for updates...
	I0429 04:52:55.620161    7744 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:52:55.620168    7744 status.go:255] checking status of multinode-368000 ...
	I0429 04:52:55.620460    7744 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:52:55.620466    7744 status.go:343] host is not running, skipping remaining checks
	I0429 04:52:55.620468    7744 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (74.617583ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:53:05.999386    7746 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:53:05.999575    7746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:05.999579    7746 out.go:304] Setting ErrFile to fd 2...
	I0429 04:53:05.999583    7746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:05.999762    7746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:53:05.999906    7746 out.go:298] Setting JSON to false
	I0429 04:53:05.999921    7746 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:53:05.999968    7746 notify.go:220] Checking for updates...
	I0429 04:53:06.000163    7746 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:53:06.000171    7746 status.go:255] checking status of multinode-368000 ...
	I0429 04:53:06.000444    7746 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:53:06.000448    7746 status.go:343] host is not running, skipping remaining checks
	I0429 04:53:06.000451    7746 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (78.615333ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:53:15.732850    7751 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:53:15.733073    7751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:15.733077    7751 out.go:304] Setting ErrFile to fd 2...
	I0429 04:53:15.733080    7751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:15.733267    7751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:53:15.733431    7751 out.go:298] Setting JSON to false
	I0429 04:53:15.733444    7751 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:53:15.733483    7751 notify.go:220] Checking for updates...
	I0429 04:53:15.733726    7751 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:53:15.733733    7751 status.go:255] checking status of multinode-368000 ...
	I0429 04:53:15.734009    7751 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:53:15.734015    7751 status.go:343] host is not running, skipping remaining checks
	I0429 04:53:15.734017    7751 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr: exit status 7 (74.0745ms)

                                                
                                                
-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:53:37.043381    7756 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:53:37.043599    7756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:37.043603    7756 out.go:304] Setting ErrFile to fd 2...
	I0429 04:53:37.043606    7756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:37.043773    7756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:53:37.043937    7756 out.go:298] Setting JSON to false
	I0429 04:53:37.043951    7756 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:53:37.043989    7756 notify.go:220] Checking for updates...
	I0429 04:53:37.044212    7756 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:53:37.044219    7756 status.go:255] checking status of multinode-368000 ...
	I0429 04:53:37.044484    7756 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:53:37.044488    7756 status.go:343] host is not running, skipping remaining checks
	I0429 04:53:37.044491    7756 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-368000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (34.778958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (56.72s)
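Note: the repeated status runs above (04:52:40 through 04:53:37) are the test polling with a growing backoff until its budget runs out; every attempt exits 7 because the host never leaves Stopped. The pattern, reduced to a generic sketch (the real retry helper lives in the test framework):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForRunning(profile string, budget time.Duration) error {
		delay := time.Second
		for start := time.Now(); time.Since(start) < budget; delay *= 2 {
			// minikube status exits 0 only when the host is Running.
			if exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").Run() == nil {
				return nil
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("%s never reached Running", profile)
	}

	func main() {
		fmt.Println(waitForRunning("multinode-368000", time.Minute))
	}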

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.47s)
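Note: this run surfaces the root cause behind the whole Stopped cascade above: the qemu2 driver plumbs the VM's NIC through socket_vmnet_client, and every restart below dies with Connection refused on /var/run/socket_vmnet, meaning no socket_vmnet daemon is listening on that path. A quick standalone probe for that precondition (an illustrative sketch, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // Connection refused here
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}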

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-368000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-368000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-368000: (3.105494s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-368000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-368000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.229389625s)

                                                
                                                
-- stdout --
	* [multinode-368000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-368000" primary control-plane node in "multinode-368000" cluster
	* Restarting existing qemu2 VM for "multinode-368000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-368000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 04:53:40.285188    7780 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:53:40.285385    7780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:40.285389    7780 out.go:304] Setting ErrFile to fd 2...
	I0429 04:53:40.285392    7780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:40.285881    7780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:53:40.287685    7780 out.go:298] Setting JSON to false
	I0429 04:53:40.306541    7780 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4991,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:53:40.306610    7780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:53:40.311718    7780 out.go:177] * [multinode-368000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:53:40.318753    7780 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:53:40.322593    7780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:53:40.318821    7780 notify.go:220] Checking for updates...
	I0429 04:53:40.328593    7780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:53:40.331621    7780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:53:40.334658    7780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:53:40.341768    7780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:53:40.344955    7780 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:53:40.345001    7780 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:53:40.349585    7780 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:53:40.356652    7780 start.go:297] selected driver: qemu2
	I0429 04:53:40.356660    7780 start.go:901] validating driver "qemu2" against &{Name:multinode-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:multinode-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:53:40.356745    7780 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:53:40.359142    7780 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:53:40.359190    7780 cni.go:84] Creating CNI manager for ""
	I0429 04:53:40.359196    7780 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 04:53:40.359247    7780 start.go:340] cluster config:
	{Name:multinode-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-368000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:53:40.363652    7780 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:53:40.370637    7780 out.go:177] * Starting "multinode-368000" primary control-plane node in "multinode-368000" cluster
	I0429 04:53:40.374651    7780 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:53:40.374669    7780 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:53:40.374675    7780 cache.go:56] Caching tarball of preloaded images
	I0429 04:53:40.374749    7780 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:53:40.374755    7780 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:53:40.374820    7780 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/multinode-368000/config.json ...
	I0429 04:53:40.375313    7780 start.go:360] acquireMachinesLock for multinode-368000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:53:40.375349    7780 start.go:364] duration metric: took 30.209µs to acquireMachinesLock for "multinode-368000"
	I0429 04:53:40.375359    7780 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:53:40.375366    7780 fix.go:54] fixHost starting: 
	I0429 04:53:40.375482    7780 fix.go:112] recreateIfNeeded on multinode-368000: state=Stopped err=<nil>
	W0429 04:53:40.375490    7780 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:53:40.383780    7780 out.go:177] * Restarting existing qemu2 VM for "multinode-368000" ...
	I0429 04:53:40.387634    7780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:8e:d2:8d:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:53:40.389805    7780 main.go:141] libmachine: STDOUT: 
	I0429 04:53:40.389838    7780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:53:40.389868    7780 fix.go:56] duration metric: took 14.501333ms for fixHost
	I0429 04:53:40.389873    7780 start.go:83] releasing machines lock for "multinode-368000", held for 14.519417ms
	W0429 04:53:40.389881    7780 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:53:40.389921    7780 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:53:40.389926    7780 start.go:728] Will try again in 5 seconds ...
	I0429 04:53:45.392080    7780 start.go:360] acquireMachinesLock for multinode-368000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:53:45.392544    7780 start.go:364] duration metric: took 326.667µs to acquireMachinesLock for "multinode-368000"
	I0429 04:53:45.392692    7780 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:53:45.392712    7780 fix.go:54] fixHost starting: 
	I0429 04:53:45.393464    7780 fix.go:112] recreateIfNeeded on multinode-368000: state=Stopped err=<nil>
	W0429 04:53:45.393490    7780 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:53:45.397876    7780 out.go:177] * Restarting existing qemu2 VM for "multinode-368000" ...
	I0429 04:53:45.402134    7780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:8e:d2:8d:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:53:45.411084    7780 main.go:141] libmachine: STDOUT: 
	I0429 04:53:45.411138    7780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:53:45.411204    7780 fix.go:56] duration metric: took 18.493167ms for fixHost
	I0429 04:53:45.411228    7780 start.go:83] releasing machines lock for "multinode-368000", held for 18.645375ms
	W0429 04:53:45.411389    7780 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-368000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-368000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:53:45.418890    7780 out.go:177] 
	W0429 04:53:45.422858    7780 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:53:45.422875    7780 out.go:239] * 
	* 
	W0429 04:53:45.425412    7780 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:53:45.433657    7780 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-368000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-368000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (34.333042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.47s)
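
Note: every start attempt in this test fails at the same point: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand qemu-system-aarch64 its network file descriptor (-netdev socket,id=net0,fd=3) and the VM never boots. A minimal standalone Go sketch of that reachability probe (not part of the test suite; the socket path is copied from the logs above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no daemon bound to the socket this yields the same
		// "connection refused" seen throughout this report.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

While the daemon behind the socket stays down, the retry at start.go:728 is guaranteed to fail the same way, which is the pattern repeated by every test below.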

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 node delete m03: exit status 83 (43.171791ms)

-- stdout --
	* The control-plane node multinode-368000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-368000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-368000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr: exit status 7 (32.159834ms)

-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0429 04:53:45.623196    7794 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:53:45.623360    7794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:45.623363    7794 out.go:304] Setting ErrFile to fd 2...
	I0429 04:53:45.623365    7794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:45.623488    7794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:53:45.623607    7794 out.go:298] Setting JSON to false
	I0429 04:53:45.623618    7794 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:53:45.623682    7794 notify.go:220] Checking for updates...
	I0429 04:53:45.623840    7794 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:53:45.623847    7794 status.go:255] checking status of multinode-368000 ...
	I0429 04:53:45.624061    7794 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:53:45.624064    7794 status.go:343] host is not running, skipping remaining checks
	I0429 04:53:45.624067    7794 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.390958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
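
Note: the post-mortem helper's status --format={{.Host}} renders a Go text/template against the per-node status struct dumped at status.go:257 above, which is why its stdout is the single word Stopped. A sketch of that rendering (the struct here is a stand-in with field names copied from the log dump, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields visible in the dump
// `&{Name:multinode-368000 Host:Stopped Kubelet:Stopped ...}`.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{
		Name: "multinode-368000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped",
	}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Stopped
		panic(err)
	}
}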

TestMultiNode/serial/StopMultiNode (3.22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-368000 stop: (3.088938541s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status: exit status 7 (67.773334ms)

-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr: exit status 7 (33.844416ms)

-- stdout --
	multinode-368000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0429 04:53:48.846692    7818 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:53:48.846851    7818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:48.846854    7818 out.go:304] Setting ErrFile to fd 2...
	I0429 04:53:48.846856    7818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:48.846989    7818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:53:48.847118    7818 out.go:298] Setting JSON to false
	I0429 04:53:48.847134    7818 mustload.go:65] Loading cluster: multinode-368000
	I0429 04:53:48.847193    7818 notify.go:220] Checking for updates...
	I0429 04:53:48.847359    7818 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:53:48.847365    7818 status.go:255] checking status of multinode-368000 ...
	I0429 04:53:48.847579    7818 status.go:330] multinode-368000 host status = "Stopped" (err=<nil>)
	I0429 04:53:48.847583    7818 status.go:343] host is not running, skipping remaining checks
	I0429 04:53:48.847585    7818 status.go:257] multinode-368000 status: &{Name:multinode-368000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr": multinode-368000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-368000 status --alsologtostderr": multinode-368000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.244583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.22s)
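
Note: the assertions at multinode_test.go:364 and :368 fail because the status output lists only the control-plane node; the worker nodes were never created, so the count of "host: Stopped" / "kubelet: Stopped" lines comes up short. A paraphrased sketch of that check (not the actual test source; the expected node count is an assumption about this point in the serial run):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The stdout captured above: only the control-plane node is reported.
	status := "multinode-368000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	const wantNodes = 2 // assumed: a two-node cluster should exist here
	if got := strings.Count(status, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}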

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-368000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-368000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.192668417s)

-- stdout --
	* [multinode-368000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-368000" primary control-plane node in "multinode-368000" cluster
	* Restarting existing qemu2 VM for "multinode-368000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-368000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:53:48.910791    7822 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:53:48.910925    7822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:48.910928    7822 out.go:304] Setting ErrFile to fd 2...
	I0429 04:53:48.910931    7822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:53:48.911041    7822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:53:48.911987    7822 out.go:298] Setting JSON to false
	I0429 04:53:48.927915    7822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4999,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:53:48.927975    7822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:53:48.933570    7822 out.go:177] * [multinode-368000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:53:48.945497    7822 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:53:48.941539    7822 notify.go:220] Checking for updates...
	I0429 04:53:48.951509    7822 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:53:48.954515    7822 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:53:48.957439    7822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:53:48.960477    7822 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:53:48.963534    7822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:53:48.966792    7822 config.go:182] Loaded profile config "multinode-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:53:48.967072    7822 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:53:48.971467    7822 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:53:48.978438    7822 start.go:297] selected driver: qemu2
	I0429 04:53:48.978447    7822 start.go:901] validating driver "qemu2" against &{Name:multinode-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:53:48.978534    7822 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:53:48.980877    7822 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:53:48.980915    7822 cni.go:84] Creating CNI manager for ""
	I0429 04:53:48.980920    7822 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 04:53:48.980973    7822 start.go:340] cluster config:
	{Name:multinode-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:53:48.985342    7822 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:53:48.992481    7822 out.go:177] * Starting "multinode-368000" primary control-plane node in "multinode-368000" cluster
	I0429 04:53:48.996594    7822 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:53:48.996612    7822 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:53:48.996623    7822 cache.go:56] Caching tarball of preloaded images
	I0429 04:53:48.996685    7822 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:53:48.996692    7822 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:53:48.996748    7822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/multinode-368000/config.json ...
	I0429 04:53:48.997239    7822 start.go:360] acquireMachinesLock for multinode-368000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:53:48.997267    7822 start.go:364] duration metric: took 22.125µs to acquireMachinesLock for "multinode-368000"
	I0429 04:53:48.997277    7822 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:53:48.997282    7822 fix.go:54] fixHost starting: 
	I0429 04:53:48.997405    7822 fix.go:112] recreateIfNeeded on multinode-368000: state=Stopped err=<nil>
	W0429 04:53:48.997413    7822 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:53:49.005459    7822 out.go:177] * Restarting existing qemu2 VM for "multinode-368000" ...
	I0429 04:53:49.009448    7822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:8e:d2:8d:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:53:49.011561    7822 main.go:141] libmachine: STDOUT: 
	I0429 04:53:49.011584    7822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:53:49.011610    7822 fix.go:56] duration metric: took 14.328125ms for fixHost
	I0429 04:53:49.011614    7822 start.go:83] releasing machines lock for "multinode-368000", held for 14.343333ms
	W0429 04:53:49.011622    7822 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:53:49.011658    7822 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:53:49.011663    7822 start.go:728] Will try again in 5 seconds ...
	I0429 04:53:54.013799    7822 start.go:360] acquireMachinesLock for multinode-368000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:53:54.014210    7822 start.go:364] duration metric: took 330.959µs to acquireMachinesLock for "multinode-368000"
	I0429 04:53:54.014340    7822 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:53:54.014358    7822 fix.go:54] fixHost starting: 
	I0429 04:53:54.015074    7822 fix.go:112] recreateIfNeeded on multinode-368000: state=Stopped err=<nil>
	W0429 04:53:54.015099    7822 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:53:54.020909    7822 out.go:177] * Restarting existing qemu2 VM for "multinode-368000" ...
	I0429 04:53:54.029731    7822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0e:8e:d2:8d:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/multinode-368000/disk.qcow2
	I0429 04:53:54.038697    7822 main.go:141] libmachine: STDOUT: 
	I0429 04:53:54.038759    7822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:53:54.038828    7822 fix.go:56] duration metric: took 24.467292ms for fixHost
	I0429 04:53:54.038844    7822 start.go:83] releasing machines lock for "multinode-368000", held for 24.605416ms
	W0429 04:53:54.038996    7822 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-368000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-368000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:53:54.046519    7822 out.go:177] 
	W0429 04:53:54.049496    7822 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:53:54.049527    7822 out.go:239] * 
	* 
	W0429 04:53:54.052153    7822 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:53:54.059491    7822 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-368000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (73.366041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
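
Note: the 5.27s duration is almost entirely the single retry visible at start.go:713-728 above: one failed fixHost, a five-second wait, and a second identical failure. A simplified sketch of that control flow (assumed shape, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails twice above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		err = startHost()
	}
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}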

TestMultiNode/serial/ValidateNameConflict (20.15s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-368000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-368000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-368000-m01 --driver=qemu2 : exit status 80 (9.9155245s)

-- stdout --
	* [multinode-368000-m01] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-368000-m01" primary control-plane node in "multinode-368000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-368000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-368000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-368000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-368000-m02 --driver=qemu2 : exit status 80 (9.976018208s)

-- stdout --
	* [multinode-368000-m02] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-368000-m02" primary control-plane node in "multinode-368000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-368000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-368000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-368000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-368000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-368000: exit status 83 (82.494291ms)

-- stdout --
	* The control-plane node multinode-368000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-368000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-368000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-368000 -n multinode-368000: exit status 7 (32.994459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-368000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.15s)
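
Note: this test deliberately starts profiles named multinode-368000-m01 and multinode-368000-m02 because multi-node machine names appear to be derived as <profile>-m<NN> (inferred from the m01/m02/m03 suffixes in this report, not taken from minikube's source), so such a profile name collides with an existing cluster's node names. A sketch of the collision:

package main

import "fmt"

// machineName derives a node's machine name from the profile name;
// the "-m<NN>" suffix scheme is an assumption based on this report.
func machineName(profile string, nodeIndex int) string {
	if nodeIndex == 1 {
		return profile // the primary control-plane node reuses the profile name
	}
	return fmt.Sprintf("%s-m%02d", profile, nodeIndex)
}

func main() {
	fmt.Println(machineName("multinode-368000", 2))     // multinode-368000-m02
	fmt.Println(machineName("multinode-368000-m02", 1)) // multinode-368000-m02 (collision)
}

Both starts still exit 80 here, though, because VM creation fails on the socket_vmnet connection before any name-conflict handling comes into play.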

TestPreload (10.26s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-555000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-555000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.081543417s)

-- stdout --
	* [test-preload-555000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-555000" primary control-plane node in "test-preload-555000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-555000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:54:14.467841    7876 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:54:14.467989    7876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:54:14.467993    7876 out.go:304] Setting ErrFile to fd 2...
	I0429 04:54:14.467995    7876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:54:14.468124    7876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:54:14.469203    7876 out.go:298] Setting JSON to false
	I0429 04:54:14.485223    7876 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5025,"bootTime":1714386629,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:54:14.485294    7876 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:54:14.490500    7876 out.go:177] * [test-preload-555000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:54:14.498376    7876 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:54:14.502284    7876 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:54:14.498415    7876 notify.go:220] Checking for updates...
	I0429 04:54:14.508334    7876 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:54:14.509783    7876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:54:14.513400    7876 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:54:14.516380    7876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:54:14.519752    7876 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:54:14.519808    7876 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:54:14.527335    7876 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:54:14.534405    7876 start.go:297] selected driver: qemu2
	I0429 04:54:14.534414    7876 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:54:14.534420    7876 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:54:14.536693    7876 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:54:14.539358    7876 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:54:14.542356    7876 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:54:14.542384    7876 cni.go:84] Creating CNI manager for ""
	I0429 04:54:14.542390    7876 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:54:14.542397    7876 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 04:54:14.542423    7876 start.go:340] cluster config:
	{Name:test-preload-555000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:54:14.546968    7876 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.554308    7876 out.go:177] * Starting "test-preload-555000" primary control-plane node in "test-preload-555000" cluster
	I0429 04:54:14.558385    7876 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0429 04:54:14.558440    7876 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/test-preload-555000/config.json ...
	I0429 04:54:14.558456    7876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/test-preload-555000/config.json: {Name:mk5e447b81d0ab75c2c4c0933023f85ba0aaabd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:54:14.558493    7876 cache.go:107] acquiring lock: {Name:mk68e2e5c9190bb6f9238f94b632af0fb9eafc6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.558511    7876 cache.go:107] acquiring lock: {Name:mk1ce24c19a5e7089d0941b7d6b9aa0d3efe8313 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.558493    7876 cache.go:107] acquiring lock: {Name:mk569f9925b4e09ec0f8a2912e6f7665c1457fd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.558647    7876 cache.go:107] acquiring lock: {Name:mkfded5eb0aa2b7b7dff999859ebab269798184c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.558611    7876 cache.go:107] acquiring lock: {Name:mk062eb6afcc7c491dc089aefa57a28934454fd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.558654    7876 cache.go:107] acquiring lock: {Name:mk5bb3e4735f1dc80464d4454739d13cae28cff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.558680    7876 cache.go:107] acquiring lock: {Name:mk3f128340f73d155673f62fa2a06bbb49957e74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.558690    7876 cache.go:107] acquiring lock: {Name:mk08c4ae0f9fcf97bf2da1fc61d291e887d8f392 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:54:14.558792    7876 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 04:54:14.558814    7876 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0429 04:54:14.558824    7876 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0429 04:54:14.558886    7876 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0429 04:54:14.558939    7876 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 04:54:14.558952    7876 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0429 04:54:14.558964    7876 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0429 04:54:14.558958    7876 start.go:360] acquireMachinesLock for test-preload-555000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:54:14.559046    7876 start.go:364] duration metric: took 48.084µs to acquireMachinesLock for "test-preload-555000"
	I0429 04:54:14.559048    7876 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0429 04:54:14.559060    7876 start.go:93] Provisioning new machine with config: &{Name:test-preload-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:54:14.559114    7876 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:54:14.566339    7876 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 04:54:14.569667    7876 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0429 04:54:14.569679    7876 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 04:54:14.569783    7876 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0429 04:54:14.569815    7876 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0429 04:54:14.570429    7876 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 04:54:14.574440    7876 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0429 04:54:14.574482    7876 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0429 04:54:14.574585    7876 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0429 04:54:14.582455    7876 start.go:159] libmachine.API.Create for "test-preload-555000" (driver="qemu2")
	I0429 04:54:14.582496    7876 client.go:168] LocalClient.Create starting
	I0429 04:54:14.582564    7876 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:54:14.582592    7876 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:14.582599    7876 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:14.582648    7876 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:54:14.582670    7876 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:14.582676    7876 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:14.582990    7876 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:54:14.755012    7876 main.go:141] libmachine: Creating SSH key...
	I0429 04:54:14.840775    7876 main.go:141] libmachine: Creating Disk image...
	I0429 04:54:14.840800    7876 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:54:14.841006    7876 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2
	I0429 04:54:14.853878    7876 main.go:141] libmachine: STDOUT: 
	I0429 04:54:14.853901    7876 main.go:141] libmachine: STDERR: 
	I0429 04:54:14.853950    7876 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2 +20000M
	I0429 04:54:14.866447    7876 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:54:14.866467    7876 main.go:141] libmachine: STDERR: 
	I0429 04:54:14.866496    7876 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2
	I0429 04:54:14.866499    7876 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:54:14.866531    7876 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:87:55:15:0d:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2
	I0429 04:54:14.868567    7876 main.go:141] libmachine: STDOUT: 
	I0429 04:54:14.868582    7876 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:54:14.868603    7876 client.go:171] duration metric: took 286.1035ms to LocalClient.Create
	I0429 04:54:16.762456    7876 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0429 04:54:16.869858    7876 start.go:128] duration metric: took 2.310744833s to createHost
	I0429 04:54:16.869897    7876 start.go:83] releasing machines lock for "test-preload-555000", held for 2.31086075s
	W0429 04:54:16.869954    7876 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:54:16.885861    7876 out.go:177] * Deleting "test-preload-555000" in qemu2 ...
	I0429 04:54:16.893152    7876 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0429 04:54:16.893193    7876 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.334584917s
	I0429 04:54:16.893234    7876 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0429 04:54:16.913997    7876 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:54:16.914037    7876 start.go:728] Will try again in 5 seconds ...
	I0429 04:54:16.920654    7876 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0429 04:54:16.925533    7876 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0429 04:54:16.991002    7876 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0429 04:54:16.991104    7876 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0429 04:54:17.542652    7876 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0429 04:54:17.549187    7876 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0429 04:54:17.550850    7876 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0429 04:54:17.651806    7876 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0429 04:54:17.651910    7876 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 04:54:18.226935    7876 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0429 04:54:18.226989    7876 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.668520833s
	I0429 04:54:18.227017    7876 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0429 04:54:18.963172    7876 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0429 04:54:18.963226    7876 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.404611583s
	I0429 04:54:18.963254    7876 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0429 04:54:19.278794    7876 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0429 04:54:19.278870    7876 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.720223583s
	I0429 04:54:19.278899    7876 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0429 04:54:21.613248    7876 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0429 04:54:21.613303    7876 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 7.054846333s
	I0429 04:54:21.613330    7876 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0429 04:54:21.857404    7876 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0429 04:54:21.857456    7876 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.298945042s
	I0429 04:54:21.857481    7876 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0429 04:54:21.914174    7876 start.go:360] acquireMachinesLock for test-preload-555000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:54:21.914721    7876 start.go:364] duration metric: took 489.666µs to acquireMachinesLock for "test-preload-555000"
	I0429 04:54:21.914776    7876 start.go:93] Provisioning new machine with config: &{Name:test-preload-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:54:21.914983    7876 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:54:21.925631    7876 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 04:54:21.976052    7876 start.go:159] libmachine.API.Create for "test-preload-555000" (driver="qemu2")
	I0429 04:54:21.976110    7876 client.go:168] LocalClient.Create starting
	I0429 04:54:21.976226    7876 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:54:21.976295    7876 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:21.976319    7876 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:21.976391    7876 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:54:21.976439    7876 main.go:141] libmachine: Decoding PEM data...
	I0429 04:54:21.976452    7876 main.go:141] libmachine: Parsing certificate...
	I0429 04:54:21.977008    7876 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:54:22.128542    7876 main.go:141] libmachine: Creating SSH key...
	I0429 04:54:22.181149    7876 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0429 04:54:22.181163    7876 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 7.622741166s
	I0429 04:54:22.181170    7876 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0429 04:54:22.444863    7876 main.go:141] libmachine: Creating Disk image...
	I0429 04:54:22.444874    7876 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:54:22.445136    7876 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2
	I0429 04:54:22.458532    7876 main.go:141] libmachine: STDOUT: 
	I0429 04:54:22.458554    7876 main.go:141] libmachine: STDERR: 
	I0429 04:54:22.458609    7876 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2 +20000M
	I0429 04:54:22.469840    7876 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:54:22.469866    7876 main.go:141] libmachine: STDERR: 
	I0429 04:54:22.469878    7876 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2
	I0429 04:54:22.469881    7876 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:54:22.469951    7876 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:5e:6b:c2:02:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/test-preload-555000/disk.qcow2
	I0429 04:54:22.471769    7876 main.go:141] libmachine: STDOUT: 
	I0429 04:54:22.471790    7876 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:54:22.471803    7876 client.go:171] duration metric: took 495.692667ms to LocalClient.Create
	I0429 04:54:24.474334    7876 start.go:128] duration metric: took 2.559316417s to createHost
	I0429 04:54:24.474392    7876 start.go:83] releasing machines lock for "test-preload-555000", held for 2.5596695s
	W0429 04:54:24.474746    7876 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:54:24.484179    7876 out.go:177] 
	W0429 04:54:24.489179    7876 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:54:24.489216    7876 out.go:239] * 
	* 
	W0429 04:54:24.492225    7876 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:54:24.502086    7876 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-555000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-04-29 04:54:24.521286 -0700 PDT m=+637.153549710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-555000 -n test-preload-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-555000 -n test-preload-555000: exit status 7 (68.926833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-555000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-555000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-555000
--- FAIL: TestPreload (10.26s)
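
All of the qemu2 start failures above come down to one host-side condition: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives the file descriptor it expects for its "-netdev socket,id=net0,fd=3" device. A minimal sketch for checking the daemon on the CI host follows; the launchd service label is an assumption (it depends on how socket_vmnet was installed) and does not come from this report:

	# Does the socket path exist, and is a daemon serving it?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet runs under launchd (label assumed from the upstream install docs):
	sudo launchctl list | grep socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet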

TestScheduledStopUnix (10.08s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-515000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-515000 --memory=2048 --driver=qemu2 : exit status 80 (9.908924625s)

-- stdout --
	* [scheduled-stop-515000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-515000" primary control-plane node in "scheduled-stop-515000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-515000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-515000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-515000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-515000" primary control-plane node in "scheduled-stop-515000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-515000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-515000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-29 04:54:34.60404 -0700 PDT m=+647.236391418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-515000 -n scheduled-stop-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-515000 -n scheduled-stop-515000: exit status 7 (70.296542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-515000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-515000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-515000
--- FAIL: TestScheduledStopUnix (10.08s)

TestSkaffold (12.32s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe112432808 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-562000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-562000 --memory=2600 --driver=qemu2 : exit status 80 (10.009113667s)

-- stdout --
	* [skaffold-562000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-562000" primary control-plane node in "skaffold-562000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-562000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-562000" primary control-plane node in "skaffold-562000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-04-29 04:54:46.924993 -0700 PDT m=+659.557453418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-562000 -n skaffold-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-562000 -n skaffold-562000: exit status 7 (65.229542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-562000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-562000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-562000
--- FAIL: TestSkaffold (12.32s)
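
Because minikube wraps QEMU in socket_vmnet_client (see the "executing:" lines logged by TestPreload above), the refusal can be reproduced without minikube by substituting a trivial command for qemu-system-aarch64. This is a sketch inferred from the logged invocation, not an official diagnostic; socket_vmnet_client connects to the given socket and execs its argument with the vmnet file descriptor attached:

	# Prints the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# while the daemon is down; succeeds once socket_vmnet is serving again.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true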

TestRunningBinaryUpgrade (604.89s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade


=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1616177054 start -p running-upgrade-310000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1616177054 start -p running-upgrade-310000 --memory=2200 --vm-driver=qemu2 : (1m0.208060208s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-310000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-310000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m31.223248375s)

-- stdout --
	* [running-upgrade-310000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-310000" primary control-plane node in "running-upgrade-310000" cluster
	* Updating the running qemu2 "running-upgrade-310000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0429 04:56:28.657042    8269 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:56:28.657179    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:56:28.657182    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:56:28.657184    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:56:28.657307    8269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:56:28.658267    8269 out.go:298] Setting JSON to false
	I0429 04:56:28.676349    8269 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5159,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:56:28.676415    8269 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:56:28.681638    8269 out.go:177] * [running-upgrade-310000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:56:28.689598    8269 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:56:28.694595    8269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:56:28.689666    8269 notify.go:220] Checking for updates...
	I0429 04:56:28.700575    8269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:56:28.703578    8269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:56:28.704910    8269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:56:28.707527    8269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:56:28.710819    8269 config.go:182] Loaded profile config "running-upgrade-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 04:56:28.714576    8269 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0429 04:56:28.717541    8269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:56:28.721542    8269 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:56:28.726563    8269 start.go:297] selected driver: qemu2
	I0429 04:56:28.726572    8269 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51195 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 04:56:28.726636    8269 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:56:28.729123    8269 cni.go:84] Creating CNI manager for ""
	I0429 04:56:28.729139    8269 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:56:28.729157    8269 start.go:340] cluster config:
	{Name:running-upgrade-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51195 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 04:56:28.729210    8269 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:56:28.736533    8269 out.go:177] * Starting "running-upgrade-310000" primary control-plane node in "running-upgrade-310000" cluster
	I0429 04:56:28.740490    8269 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0429 04:56:28.740505    8269 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0429 04:56:28.740510    8269 cache.go:56] Caching tarball of preloaded images
	I0429 04:56:28.740549    8269 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:56:28.740554    8269 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0429 04:56:28.740600    8269 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/config.json ...
	I0429 04:56:28.741043    8269 start.go:360] acquireMachinesLock for running-upgrade-310000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:56:28.741071    8269 start.go:364] duration metric: took 22.084µs to acquireMachinesLock for "running-upgrade-310000"
	I0429 04:56:28.741079    8269 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:56:28.741084    8269 fix.go:54] fixHost starting: 
	I0429 04:56:28.741742    8269 fix.go:112] recreateIfNeeded on running-upgrade-310000: state=Running err=<nil>
	W0429 04:56:28.741749    8269 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:56:28.746534    8269 out.go:177] * Updating the running qemu2 "running-upgrade-310000" VM ...
	I0429 04:56:28.754559    8269 machine.go:94] provisionDockerMachine start ...
	I0429 04:56:28.754591    8269 main.go:141] libmachine: Using SSH client type: native
	I0429 04:56:28.754692    8269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005c1c80] 0x1005c44e0 <nil>  [] 0s} localhost 51162 <nil> <nil>}
	I0429 04:56:28.754696    8269 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 04:56:28.816278    8269 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-310000
	
	I0429 04:56:28.816294    8269 buildroot.go:166] provisioning hostname "running-upgrade-310000"
	I0429 04:56:28.816341    8269 main.go:141] libmachine: Using SSH client type: native
	I0429 04:56:28.816456    8269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005c1c80] 0x1005c44e0 <nil>  [] 0s} localhost 51162 <nil> <nil>}
	I0429 04:56:28.816463    8269 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-310000 && echo "running-upgrade-310000" | sudo tee /etc/hostname
	I0429 04:56:28.880907    8269 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-310000
	
	I0429 04:56:28.880961    8269 main.go:141] libmachine: Using SSH client type: native
	I0429 04:56:28.881063    8269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005c1c80] 0x1005c44e0 <nil>  [] 0s} localhost 51162 <nil> <nil>}
	I0429 04:56:28.881071    8269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-310000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-310000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-310000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 04:56:28.940747    8269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 04:56:28.940762    8269 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18771-6092/.minikube CaCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18771-6092/.minikube}
	I0429 04:56:28.940770    8269 buildroot.go:174] setting up certificates
	I0429 04:56:28.940775    8269 provision.go:84] configureAuth start
	I0429 04:56:28.940779    8269 provision.go:143] copyHostCerts
	I0429 04:56:28.940852    8269 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem, removing ...
	I0429 04:56:28.940858    8269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem
	I0429 04:56:28.940980    8269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem (1082 bytes)
	I0429 04:56:28.941161    8269 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem, removing ...
	I0429 04:56:28.941164    8269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem
	I0429 04:56:28.941207    8269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem (1123 bytes)
	I0429 04:56:28.941309    8269 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem, removing ...
	I0429 04:56:28.941312    8269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem
	I0429 04:56:28.941355    8269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem (1679 bytes)
	I0429 04:56:28.941456    8269 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-310000 san=[127.0.0.1 localhost minikube running-upgrade-310000]
	I0429 04:56:29.022300    8269 provision.go:177] copyRemoteCerts
	I0429 04:56:29.022346    8269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 04:56:29.022354    8269 sshutil.go:53] new ssh client: &{IP:localhost Port:51162 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/running-upgrade-310000/id_rsa Username:docker}
	I0429 04:56:29.058045    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 04:56:29.064898    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 04:56:29.071500    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 04:56:29.078322    8269 provision.go:87] duration metric: took 137.54375ms to configureAuth
	I0429 04:56:29.078332    8269 buildroot.go:189] setting minikube options for container-runtime
	I0429 04:56:29.078436    8269 config.go:182] Loaded profile config "running-upgrade-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 04:56:29.078467    8269 main.go:141] libmachine: Using SSH client type: native
	I0429 04:56:29.078545    8269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005c1c80] 0x1005c44e0 <nil>  [] 0s} localhost 51162 <nil> <nil>}
	I0429 04:56:29.078550    8269 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 04:56:29.140107    8269 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 04:56:29.140116    8269 buildroot.go:70] root file system type: tmpfs
	I0429 04:56:29.140171    8269 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 04:56:29.140210    8269 main.go:141] libmachine: Using SSH client type: native
	I0429 04:56:29.140324    8269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005c1c80] 0x1005c44e0 <nil>  [] 0s} localhost 51162 <nil> <nil>}
	I0429 04:56:29.140356    8269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 04:56:29.205164    8269 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 04:56:29.205237    8269 main.go:141] libmachine: Using SSH client type: native
	I0429 04:56:29.205345    8269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005c1c80] 0x1005c44e0 <nil>  [] 0s} localhost 51162 <nil> <nil>}
	I0429 04:56:29.205356    8269 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 04:56:29.269798    8269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 04:56:29.269810    8269 machine.go:97] duration metric: took 515.24875ms to provisionDockerMachine
	I0429 04:56:29.269815    8269 start.go:293] postStartSetup for "running-upgrade-310000" (driver="qemu2")
	I0429 04:56:29.269822    8269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 04:56:29.269888    8269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 04:56:29.269897    8269 sshutil.go:53] new ssh client: &{IP:localhost Port:51162 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/running-upgrade-310000/id_rsa Username:docker}
	I0429 04:56:29.304540    8269 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 04:56:29.305828    8269 info.go:137] Remote host: Buildroot 2021.02.12
	I0429 04:56:29.305834    8269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18771-6092/.minikube/addons for local assets ...
	I0429 04:56:29.305903    8269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18771-6092/.minikube/files for local assets ...
	I0429 04:56:29.305997    8269 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem -> 65002.pem in /etc/ssl/certs
	I0429 04:56:29.306097    8269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 04:56:29.308966    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem --> /etc/ssl/certs/65002.pem (1708 bytes)
	I0429 04:56:29.316174    8269 start.go:296] duration metric: took 46.3535ms for postStartSetup
	I0429 04:56:29.316187    8269 fix.go:56] duration metric: took 575.1095ms for fixHost
	I0429 04:56:29.316221    8269 main.go:141] libmachine: Using SSH client type: native
	I0429 04:56:29.316328    8269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005c1c80] 0x1005c44e0 <nil>  [] 0s} localhost 51162 <nil> <nil>}
	I0429 04:56:29.316332    8269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 04:56:29.375632    8269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714391789.723715641
	
	I0429 04:56:29.375640    8269 fix.go:216] guest clock: 1714391789.723715641
	I0429 04:56:29.375644    8269 fix.go:229] Guest: 2024-04-29 04:56:29.723715641 -0700 PDT Remote: 2024-04-29 04:56:29.316189 -0700 PDT m=+0.682545668 (delta=407.526641ms)
	I0429 04:56:29.375655    8269 fix.go:200] guest clock delta is within tolerance: 407.526641ms
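The clock check compares the guest's date +%s.%N output against the host wall clock captured when the SSH command returned. Redoing the arithmetic from the two timestamps above (a shell sketch; bc assumed available):

    guest=1714391789.723715641   # `date +%s.%N` inside the VM
    host=1714391789.316189       # host time when the command completed
    echo "$guest - $host" | bc   # 0.407526641 s -> within tolerance, so no clock resync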
	I0429 04:56:29.375657    8269 start.go:83] releasing machines lock for "running-upgrade-310000", held for 634.589083ms
	I0429 04:56:29.375718    8269 ssh_runner.go:195] Run: cat /version.json
	I0429 04:56:29.375726    8269 sshutil.go:53] new ssh client: &{IP:localhost Port:51162 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/running-upgrade-310000/id_rsa Username:docker}
	I0429 04:56:29.375719    8269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 04:56:29.375752    8269 sshutil.go:53] new ssh client: &{IP:localhost Port:51162 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/running-upgrade-310000/id_rsa Username:docker}
	W0429 04:56:29.376283    8269 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51162: connect: connection refused
	I0429 04:56:29.376306    8269 retry.go:31] will retry after 228.948831ms: dial tcp [::1]:51162: connect: connection refused
	W0429 04:56:29.640226    8269 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0429 04:56:29.640284    8269 ssh_runner.go:195] Run: systemctl --version
	I0429 04:56:29.642107    8269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 04:56:29.643903    8269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 04:56:29.643930    8269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0429 04:56:29.646821    8269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0429 04:56:29.651107    8269 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
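The two find ... -exec sed one-liners above pin every bridge/podman CNI config under /etc/cni/net.d to the pod network minikube expects. Assuming podman's usual 10.88.0.0/16 default, the effect on 87-podman-bridge.conflist is roughly:

    # before the rewrite (typical podman defaults, illustrative):
    #   "subnet": "10.88.0.0/16",
    #   "gateway": "10.88.0.1",
    # after:
    #   "subnet": "10.244.0.0/16",
    #   "gateway": "10.244.0.1",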
	I0429 04:56:29.651114    8269 start.go:494] detecting cgroup driver to use...
	I0429 04:56:29.651222    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 04:56:29.656251    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0429 04:56:29.659074    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 04:56:29.662273    8269 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 04:56:29.662295    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 04:56:29.665470    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 04:56:29.668246    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 04:56:29.671030    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 04:56:29.673817    8269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 04:56:29.676899    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 04:56:29.679588    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 04:56:29.682661    8269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 04:56:29.686170    8269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 04:56:29.689145    8269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 04:56:29.691634    8269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 04:56:29.783278    8269 ssh_runner.go:195] Run: sudo systemctl restart containerd
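Taken together, the sed pipeline leaves /etc/containerd/config.toml with settings along these lines (a reconstructed fragment, not the literal file):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.7"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false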
	I0429 04:56:29.789716    8269 start.go:494] detecting cgroup driver to use...
	I0429 04:56:29.789775    8269 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 04:56:29.795640    8269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 04:56:29.800671    8269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 04:56:29.815503    8269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 04:56:29.820371    8269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 04:56:29.824594    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 04:56:29.829659    8269 ssh_runner.go:195] Run: which cri-dockerd
	I0429 04:56:29.830862    8269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 04:56:29.833882    8269 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 04:56:29.838809    8269 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 04:56:29.926251    8269 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 04:56:30.020436    8269 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 04:56:30.020491    8269 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 04:56:30.025997    8269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 04:56:30.113545    8269 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 04:56:31.453152    8269 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.339602417s)
	I0429 04:56:31.453213    8269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 04:56:31.458010    8269 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 04:56:31.464140    8269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 04:56:31.468485    8269 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 04:56:31.556209    8269 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 04:56:31.633511    8269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 04:56:31.707446    8269 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 04:56:31.713553    8269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 04:56:31.717718    8269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 04:56:31.797625    8269 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
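cri-dockerd is socket-activated, so the socket unit is stopped, unmasked, enabled, and restarted before the service itself; the stat that follows is minikube waiting for the socket path to exist. The same sequence by hand:

    sudo systemctl stop cri-docker.socket
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket cri-docker.service
    stat /var/run/cri-dockerd.sock   # ready once this succeeds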
	I0429 04:56:31.837141    8269 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 04:56:31.837216    8269 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 04:56:31.839276    8269 start.go:562] Will wait 60s for crictl version
	I0429 04:56:31.839330    8269 ssh_runner.go:195] Run: which crictl
	I0429 04:56:31.840792    8269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 04:56:31.852495    8269 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0429 04:56:31.852563    8269 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 04:56:31.864890    8269 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 04:56:31.885449    8269 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0429 04:56:31.885581    8269 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0429 04:56:31.887131    8269 kubeadm.go:877] updating cluster {Name:running-upgrade-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51195 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0429 04:56:31.887177    8269 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0429 04:56:31.887215    8269 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 04:56:31.897876    8269 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 04:56:31.897887    8269 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0429 04:56:31.897932    8269 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 04:56:31.901648    8269 ssh_runner.go:195] Run: which lz4
	I0429 04:56:31.902900    8269 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 04:56:31.904219    8269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 04:56:31.904229    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0429 04:56:32.596179    8269 docker.go:649] duration metric: took 693.315667ms to copy over tarball
	I0429 04:56:32.596232    8269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 04:56:33.891177    8269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.29494275s)
	I0429 04:56:33.891190    8269 ssh_runner.go:146] rm: /preloaded.tar.lz4
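The preload sequence is: stat /preloaded.tar.lz4 on the guest (the failed check above is the expected cache miss), scp the ~360 MB tarball over, unpack it into /var with extended attributes preserved so image layers keep their file capabilities, then delete the tarball. Condensed to the guest-side commands from the log:

    stat -c "%s %y" /preloaded.tar.lz4 \
      || true  # miss: the host scp's the cached tarball into place first
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4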
	I0429 04:56:33.906989    8269 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 04:56:33.910454    8269 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0429 04:56:33.915694    8269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 04:56:34.000505    8269 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 04:56:35.312868    8269 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.312354667s)
	I0429 04:56:35.312958    8269 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 04:56:35.326366    8269 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 04:56:35.326374    8269 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0429 04:56:35.326380    8269 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 04:56:35.332841    8269 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0429 04:56:35.332880    8269 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 04:56:35.333007    8269 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 04:56:35.333005    8269 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0429 04:56:35.333068    8269 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 04:56:35.333190    8269 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 04:56:35.333277    8269 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 04:56:35.333563    8269 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 04:56:35.342469    8269 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0429 04:56:35.343008    8269 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 04:56:35.343336    8269 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 04:56:35.343362    8269 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 04:56:35.343353    8269 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0429 04:56:35.343379    8269 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 04:56:35.343380    8269 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 04:56:35.343413    8269 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	W0429 04:56:36.062594    8269 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0429 04:56:36.063284    8269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 04:56:36.101985    8269 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0429 04:56:36.102067    8269 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 04:56:36.102167    8269 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 04:56:36.127750    8269 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 04:56:36.127914    8269 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 04:56:36.130176    8269 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0429 04:56:36.130193    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0429 04:56:36.161802    8269 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 04:56:36.161816    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0429 04:56:36.394382    8269 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
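Each cached image follows the same four steps visible above: docker image inspect to compare the on-node image ID against the expected hash, docker rmi when they differ (here the preloaded manifest was amd64 on an arm64 host), scp of the arm64 tarball from the local cache, and a pipe into docker load. As one sketch:

    img=gcr.io/k8s-minikube/storage-provisioner:v5
    tarball=/var/lib/minikube/images/storage-provisioner_v5
    docker image inspect --format '{{.Id}}' "$img"   # hash mismatch or missing?
    docker rmi "$img"                                # drop the stale copy
    # (the host scp's the cached arm64 image to $tarball)
    sudo cat "$tarball" | docker load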
	I0429 04:56:37.492728    8269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0429 04:56:37.532104    8269 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0429 04:56:37.532141    8269 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 04:56:37.532235    8269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0429 04:56:37.551681    8269 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0429 04:56:37.594079    8269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0429 04:56:37.615363    8269 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0429 04:56:37.615391    8269 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 04:56:37.615468    8269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0429 04:56:37.629980    8269 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0429 04:56:37.639217    8269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0429 04:56:37.642294    8269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0429 04:56:37.654604    8269 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0429 04:56:37.654627    8269 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 04:56:37.654692    8269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0429 04:56:37.663215    8269 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0429 04:56:37.663236    8269 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0429 04:56:37.663297    8269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0429 04:56:37.665985    8269 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0429 04:56:37.673779    8269 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0429 04:56:37.673891    8269 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0429 04:56:37.675236    8269 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0429 04:56:37.675248    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0429 04:56:37.682648    8269 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0429 04:56:37.682656    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0429 04:56:37.709555    8269 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0429 04:56:38.175356    8269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0429 04:56:38.194260    8269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 04:56:38.201037    8269 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0429 04:56:38.201058    8269 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0429 04:56:38.201124    8269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0429 04:56:38.202201    8269 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0429 04:56:38.202293    8269 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0429 04:56:38.225156    8269 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0429 04:56:38.225192    8269 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0429 04:56:38.225208    8269 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 04:56:38.225255    8269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 04:56:38.228117    8269 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0429 04:56:38.228138    8269 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 04:56:38.228186    8269 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0429 04:56:38.237882    8269 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0429 04:56:38.240096    8269 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0429 04:56:38.240193    8269 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0429 04:56:38.241712    8269 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0429 04:56:38.241724    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0429 04:56:38.279073    8269 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0429 04:56:38.279086    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0429 04:56:38.316845    8269 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0429 04:56:38.316883    8269 cache_images.go:92] duration metric: took 2.990522834s to LoadCachedImages
	W0429 04:56:38.316927    8269 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0429 04:56:38.316933    8269 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0429 04:56:38.316999    8269 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-310000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
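As with the docker unit earlier, the empty ExecStart= line is deliberate: systemd allows only one ExecStart= for non-oneshot services, so a drop-in must first clear the inherited value before setting its own. The drop-in minikube writes has this shape (not verbatim):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet <flags as printed above>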
	I0429 04:56:38.317054    8269 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 04:56:38.337746    8269 cni.go:84] Creating CNI manager for ""
	I0429 04:56:38.337757    8269 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:56:38.337761    8269 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 04:56:38.337770    8269 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-310000 NodeName:running-upgrade-310000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 04:56:38.337841    8269 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-310000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 04:56:38.337895    8269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0429 04:56:38.341077    8269 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 04:56:38.341110    8269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 04:56:38.344097    8269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0429 04:56:38.349221    8269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 04:56:38.354047    8269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
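The 2096-byte kubeadm.yaml.new just written stacks four documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); kubeadm consumes all of them through a single --config flag, as in the phase invocations later in this log:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml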
	I0429 04:56:38.359155    8269 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0429 04:56:38.360541    8269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 04:56:38.430868    8269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 04:56:38.436035    8269 certs.go:68] Setting up /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000 for IP: 10.0.2.15
	I0429 04:56:38.436041    8269 certs.go:194] generating shared ca certs ...
	I0429 04:56:38.436049    8269 certs.go:226] acquiring lock for ca certs: {Name:mk6c1fe0c368234e15356f74a5a8907d9d0bc3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:56:38.436283    8269 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.key
	I0429 04:56:38.436317    8269 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.key
	I0429 04:56:38.436321    8269 certs.go:256] generating profile certs ...
	I0429 04:56:38.436386    8269 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.key
	I0429 04:56:38.436399    8269 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.key.c5707e36
	I0429 04:56:38.436411    8269 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.crt.c5707e36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0429 04:56:38.531130    8269 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.crt.c5707e36 ...
	I0429 04:56:38.531135    8269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.crt.c5707e36: {Name:mk65b06acf26399913922f7b465f4032a84ddc85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:56:38.531374    8269 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.key.c5707e36 ...
	I0429 04:56:38.531379    8269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.key.c5707e36: {Name:mk75e2baa10d05d8d313ae56c726cf5dc22af56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:56:38.531496    8269 certs.go:381] copying /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.crt.c5707e36 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.crt
	I0429 04:56:38.531699    8269 certs.go:385] copying /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.key.c5707e36 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.key
	I0429 04:56:38.531877    8269 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/proxy-client.key
	I0429 04:56:38.531992    8269 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500.pem (1338 bytes)
	W0429 04:56:38.532013    8269 certs.go:480] ignoring /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500_empty.pem, impossibly tiny 0 bytes
	I0429 04:56:38.532017    8269 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 04:56:38.532038    8269 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem (1082 bytes)
	I0429 04:56:38.532057    8269 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem (1123 bytes)
	I0429 04:56:38.532074    8269 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem (1679 bytes)
	I0429 04:56:38.532110    8269 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem (1708 bytes)
	I0429 04:56:38.532453    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 04:56:38.539589    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 04:56:38.546828    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 04:56:38.554369    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 04:56:38.561271    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 04:56:38.567839    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 04:56:38.575098    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 04:56:38.582235    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 04:56:38.588755    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem --> /usr/share/ca-certificates/65002.pem (1708 bytes)
	I0429 04:56:38.595842    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 04:56:38.602471    8269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500.pem --> /usr/share/ca-certificates/6500.pem (1338 bytes)
	I0429 04:56:38.609032    8269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 04:56:38.614047    8269 ssh_runner.go:195] Run: openssl version
	I0429 04:56:38.615882    8269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65002.pem && ln -fs /usr/share/ca-certificates/65002.pem /etc/ssl/certs/65002.pem"
	I0429 04:56:38.618805    8269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65002.pem
	I0429 04:56:38.620108    8269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 11:44 /usr/share/ca-certificates/65002.pem
	I0429 04:56:38.620133    8269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65002.pem
	I0429 04:56:38.621826    8269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65002.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 04:56:38.624902    8269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 04:56:38.627943    8269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 04:56:38.629395    8269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I0429 04:56:38.629416    8269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 04:56:38.631249    8269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 04:56:38.633930    8269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6500.pem && ln -fs /usr/share/ca-certificates/6500.pem /etc/ssl/certs/6500.pem"
	I0429 04:56:38.637198    8269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6500.pem
	I0429 04:56:38.638616    8269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 11:44 /usr/share/ca-certificates/6500.pem
	I0429 04:56:38.638633    8269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6500.pem
	I0429 04:56:38.640591    8269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6500.pem /etc/ssl/certs/51391683.0"
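The hash in each symlink name comes from openssl x509 -hash, which prints the subject-name hash OpenSSL uses to look up CA certificates in /etc/ssl/certs; the .0 suffix disambiguates collisions. The b5213941.0 link above, for example, reduces to:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"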
	I0429 04:56:38.643089    8269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 04:56:38.644545    8269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 04:56:38.646147    8269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 04:56:38.647990    8269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 04:56:38.649658    8269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 04:56:38.651675    8269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 04:56:38.653297    8269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
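-checkend 86400 asks whether the certificate will still be valid 86400 seconds (24 h) from now: exit status 0 means yes, non-zero would trigger regeneration. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expiring: regenerate"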
	I0429 04:56:38.655074    8269 kubeadm.go:391] StartCluster: {Name:running-upgrade-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51195 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 04:56:38.655139    8269 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 04:56:38.665587    8269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 04:56:38.668585    8269 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 04:56:38.668590    8269 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 04:56:38.668597    8269 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 04:56:38.668617    8269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 04:56:38.671294    8269 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 04:56:38.671330    8269 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-310000" does not appear in /Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:56:38.671349    8269 kubeconfig.go:62] /Users/jenkins/minikube-integration/18771-6092/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-310000" cluster setting kubeconfig missing "running-upgrade-310000" context setting]
	I0429 04:56:38.671508    8269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/kubeconfig: {Name:mkc4105502c44b2331a2dd91226134a74ad93594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:56:38.672349    8269 kapi.go:59] client config for running-upgrade-310000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.key", CAFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101953cb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 04:56:38.673155    8269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 04:56:38.675783    8269 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-310000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0429 04:56:38.675789    8269 kubeadm.go:1154] stopping kube-system containers ...
	I0429 04:56:38.675830    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 04:56:38.686644    8269 docker.go:483] Stopping containers: [f170ec153fbc 170472751da7 3b811af53782 81e1652274cc b258a81cdd2a cbd2ba51cafa fa1182944783 0c9b48a567e5 06c13f886bf8 d445bd598284 eb30aae51986 6484a7d6c0e2 150fea0775e1]
	I0429 04:56:38.686710    8269 ssh_runner.go:195] Run: docker stop f170ec153fbc 170472751da7 3b811af53782 81e1652274cc b258a81cdd2a cbd2ba51cafa fa1182944783 0c9b48a567e5 06c13f886bf8 d445bd598284 eb30aae51986 6484a7d6c0e2 150fea0775e1
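The name filter matches the k8s_<container>_<pod>_<namespace>_... naming convention dockershim/cri-dockerd gives its containers, so one docker ps pass enumerates everything in kube-system. The two calls above condense to:

    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop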
	I0429 04:56:39.166414    8269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 04:56:39.270697    8269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 04:56:39.274618    8269 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Apr 29 11:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Apr 29 11:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Apr 29 11:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Apr 29 11:56 /etc/kubernetes/scheduler.conf
	
	I0429 04:56:39.274653    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/admin.conf
	I0429 04:56:39.277919    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 04:56:39.277954    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 04:56:39.281588    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/kubelet.conf
	I0429 04:56:39.284745    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 04:56:39.284767    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 04:56:39.287696    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/controller-manager.conf
	I0429 04:56:39.290407    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 04:56:39.290431    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 04:56:39.293463    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/scheduler.conf
	I0429 04:56:39.296029    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 04:56:39.296050    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
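The same check-and-delete runs for each of the four files: any config that does not pin the expected control-plane endpoint (port 51195 here) is removed so kubeadm will regenerate it. Condensed into a loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:51195' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done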
	I0429 04:56:39.298595    8269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 04:56:39.301706    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 04:56:39.321487    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 04:56:39.705260    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 04:56:39.894786    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 04:56:39.921141    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
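Restarting an existing control plane runs five kubeadm init phases in order instead of a full kubeadm init: regenerate certs, rewrite the kubeconfigs, restart the kubelet, write the static-pod manifests, then bring up local etcd. The five commands above collapse to:

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # word-splitting intended
    done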
	I0429 04:56:39.945131    8269 api_server.go:52] waiting for apiserver process to appear ...
	I0429 04:56:39.945217    8269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 04:56:40.447345    8269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 04:56:40.947547    8269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 04:56:41.447611    8269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 04:56:41.947270    8269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 04:56:42.447238    8269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 04:56:42.451280    8269 api_server.go:72] duration metric: took 2.506172584s to wait for apiserver process to appear ...
	I0429 04:56:42.451288    8269 api_server.go:88] waiting for apiserver healthz status ...
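The wait has two stages: poll pgrep every ~500 ms until a kube-apiserver process exists (2.5 s above), then probe /healthz over HTTPS; each probe below times out after about 5 s and, as the rest of the log shows, never succeeds. A condensed equivalent of the probe loop (minikube uses a Go client with its own timeout, so this curl loop is only a sketch):

    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q '^ok'; do
      sleep 1
    done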
	I0429 04:56:42.451298    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:56:47.457780    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:56:47.457820    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:56:52.464549    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:56:52.464613    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:56:57.469864    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:56:57.469962    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:02.474418    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:02.474464    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:07.477843    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:07.477924    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:12.481326    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:12.481376    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:17.484402    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:17.484477    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:22.488026    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:22.488103    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:27.491460    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:27.491539    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:32.494729    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:32.494810    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:37.497368    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:37.497447    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:42.498728    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
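
Every probe in the run above has the same shape: an HTTPS GET against https://10.0.2.15:8443/healthz that is abandoned after about five seconds with "Client.Timeout exceeded while awaiting headers", meaning the apiserver never sent response headers at all. A hedged sketch of one such probe (illustrative only; the sketch skips certificate verification, where the real client would trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // the ~5 s gap before every "stopped:" line
            Transport: &http.Transport{
                // Assumption for the sketch: skip verification instead of
                // loading the cluster CA bundle.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // what the log reports on timeout
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz status:", resp.Status)
    }
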
	I0429 04:57:42.499026    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:57:42.518479    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:57:42.518581    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:57:42.532424    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:57:42.532504    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:57:42.544331    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:57:42.544391    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:57:42.555281    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:57:42.555349    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:57:42.565710    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:57:42.565769    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:57:42.576073    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:57:42.576142    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:57:42.586147    8269 logs.go:276] 0 containers: []
	W0429 04:57:42.586159    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:57:42.586222    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:57:42.596390    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
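
Once healthz polling fails, diagnostics begin with one docker ps -a query per control-plane component, matching the kubelet's k8s_<name> container-naming convention; two IDs for a component (as for kube-apiserver and etcd here) indicate an exited container plus its restarted replacement. A sketch of that discovery step (assuming it runs on the node itself with docker on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containersFor lists all (including exited) container IDs whose name
    // matches k8s_<component>, mirroring the docker ps calls in the log.
    func containersFor(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids := containersFor(c)
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
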
	I0429 04:57:42.596407    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:57:42.596411    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:57:42.673415    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:57:42.673430    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:57:42.688408    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:57:42.688422    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:57:42.699927    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:57:42.699937    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:57:42.716082    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:57:42.716092    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:57:42.727942    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:57:42.727955    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:57:42.765070    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:57:42.765166    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
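
The two "Found kubelet problem" warnings come from scanning the freshly fetched kubelet journal for known failure signatures. The signature matched here is a node-authorizer denial: the apiserver finds no relationship between node 'running-upgrade-310000' and the kube-proxy ConfigMap, so the kubelet's list/watch is forbidden. A simplified sketch of such a scan (the substring match below is an assumption; the real matcher presumably uses a curated set of patterns):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        for _, line := range strings.Split(string(out), "\n") {
            // Flag reflector list/watch failures like the forbidden ConfigMap above.
            if strings.Contains(line, "reflector.go") && strings.Contains(line, "forbidden") {
                fmt.Println("Found kubelet problem:", line)
            }
        }
    }
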
	I0429 04:57:42.765917    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:57:42.765921    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:57:42.770043    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:57:42.770051    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:57:42.783436    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:57:42.783448    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:57:42.803990    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:57:42.804001    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:57:42.818102    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:57:42.818112    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:57:42.829443    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:57:42.829456    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:57:42.846453    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:57:42.846463    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:57:42.857306    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:57:42.857317    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:57:42.869154    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:57:42.869165    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:57:42.880447    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:57:42.880459    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:57:42.891163    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:57:42.891174    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:57:42.918131    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:57:42.918139    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:57:42.918160    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:57:42.918164    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:57:42.918167    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:57:42.918192    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:57:42.918203    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
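
From this point the section repeats the same cycle roughly every fifteen seconds until the caller gives up: probe healthz, and on timeout re-run the full log collection. Schematically, the outer loop looks like the sketch below, where probe and collectLogs are hypothetical wrappers for the steps shown above:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // probe and collectLogs stand in for the healthz GET and the
    // docker/journalctl gathering shown above (hypothetical names).
    func probe() error { return errors.New("context deadline exceeded") }

    func collectLogs() { fmt.Println("gathering logs ...") }

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            if probe() == nil {
                fmt.Println("apiserver healthy")
                return
            }
            collectLogs()
            time.Sleep(10 * time.Second) // the ~10 s pause between cycles in the log
        }
        fmt.Println("gave up: apiserver never became healthy")
    }
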
	I0429 04:57:52.922800    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:57:57.925718    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:57:57.926163    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:57:57.967014    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:57:57.967156    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:57:57.989601    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:57:57.989692    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:57:58.004518    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:57:58.004593    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:57:58.016591    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:57:58.016653    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:57:58.027898    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:57:58.027968    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:57:58.038366    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:57:58.038436    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:57:58.048492    8269 logs.go:276] 0 containers: []
	W0429 04:57:58.048502    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:57:58.048551    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:57:58.059287    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:57:58.059305    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:57:58.059311    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:57:58.070541    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:57:58.070553    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:57:58.082132    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:57:58.082144    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:57:58.093210    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:57:58.093222    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:57:58.117883    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:57:58.117894    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:57:58.153385    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:57:58.153397    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:57:58.179247    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:57:58.179258    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:57:58.193799    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:57:58.193810    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:57:58.208917    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:57:58.208926    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:57:58.220832    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:57:58.220843    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:57:58.240033    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:57:58.240043    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:57:58.245112    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:57:58.245123    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:57:58.260292    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:57:58.260305    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:57:58.272236    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:57:58.272247    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:57:58.289663    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:57:58.289676    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:57:58.300474    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:57:58.300485    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:57:58.336369    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:57:58.336462    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:57:58.337191    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:57:58.337195    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:57:58.354827    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:57:58.354837    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:57:58.354867    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:57:58.354873    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:57:58.354877    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:57:58.354881    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:57:58.354885    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:58:08.359200    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:58:13.361292    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:58:13.361590    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:58:13.390441    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:58:13.390571    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:58:13.409211    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:58:13.409304    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:58:13.422462    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:58:13.422533    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:58:13.434064    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:58:13.434130    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:58:13.444811    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:58:13.444869    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:58:13.455299    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:58:13.455371    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:58:13.465802    8269 logs.go:276] 0 containers: []
	W0429 04:58:13.465812    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:58:13.465866    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:58:13.483230    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:58:13.483247    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:58:13.483253    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:58:13.507090    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:58:13.507095    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:58:13.542289    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:58:13.542383    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:58:13.543092    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:58:13.543095    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:58:13.556852    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:58:13.556871    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:58:13.574355    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:58:13.574364    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:58:13.587714    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:58:13.587725    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:58:13.598940    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:58:13.598951    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:58:13.634139    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:58:13.634150    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:58:13.647979    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:58:13.647990    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:58:13.659479    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:58:13.659493    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:58:13.671061    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:58:13.671073    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:58:13.675253    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:58:13.675260    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:58:13.686329    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:58:13.690002    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:58:13.705544    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:58:13.705556    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:58:13.717287    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:58:13.717300    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:58:13.730773    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:58:13.730788    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:58:13.745143    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:58:13.745152    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:58:13.765053    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:58:13.765064    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:58:13.765094    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:58:13.765099    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:58:13.765102    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:58:13.765106    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:58:13.765109    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:58:23.769045    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:58:28.771828    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:58:28.772210    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:58:28.804450    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:58:28.804596    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:58:28.827752    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:58:28.827845    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:58:28.849312    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:58:28.849377    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:58:28.860292    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:58:28.860359    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:58:28.870792    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:58:28.870853    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:58:28.881036    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:58:28.881109    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:58:28.891413    8269 logs.go:276] 0 containers: []
	W0429 04:58:28.891422    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:58:28.891477    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:58:28.905929    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:58:28.905945    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:58:28.905950    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:58:28.917062    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:58:28.917072    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:58:28.953028    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:58:28.953123    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:58:28.953828    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:58:28.953833    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:58:28.957970    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:58:28.957982    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:58:28.969180    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:58:28.969193    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:58:28.986639    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:58:28.986648    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:58:28.998281    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:58:28.998294    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:58:29.009975    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:58:29.009987    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:58:29.044717    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:58:29.044732    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:58:29.058623    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:58:29.058637    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:58:29.078595    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:58:29.078607    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:58:29.094221    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:58:29.094231    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:58:29.118001    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:58:29.118011    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:58:29.131202    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:58:29.131214    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:58:29.145913    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:58:29.145925    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:58:29.157804    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:58:29.157817    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:58:29.168884    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:58:29.168899    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:58:29.180473    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:58:29.180482    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:58:29.180507    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:58:29.180511    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:58:29.180514    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:58:29.180520    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:58:29.180523    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:58:39.182812    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:58:44.185423    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:58:44.185850    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:58:44.220419    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:58:44.220549    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:58:44.240732    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:58:44.240830    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:58:44.254674    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:58:44.254755    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:58:44.275630    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:58:44.275705    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:58:44.286500    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:58:44.286567    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:58:44.296855    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:58:44.296926    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:58:44.306919    8269 logs.go:276] 0 containers: []
	W0429 04:58:44.306932    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:58:44.306987    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:58:44.317371    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:58:44.317388    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:58:44.317395    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:58:44.329584    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:58:44.329595    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:58:44.349275    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:58:44.349288    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:58:44.364333    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:58:44.364344    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:58:44.388466    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:58:44.388472    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:58:44.401153    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:58:44.401166    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:58:44.418573    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:58:44.418586    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:58:44.429870    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:58:44.429881    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:58:44.450021    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:58:44.450034    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:58:44.487597    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:58:44.487698    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:58:44.488403    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:58:44.488408    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:58:44.522234    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:58:44.522243    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:58:44.541663    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:58:44.541672    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:58:44.552844    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:58:44.552857    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:58:44.563921    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:58:44.563930    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:58:44.568596    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:58:44.568601    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:58:44.582296    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:58:44.582310    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:58:44.593863    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:58:44.593872    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:58:44.608001    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:58:44.608011    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:58:44.608034    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:58:44.608038    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:58:44.608041    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:58:44.608045    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:58:44.608047    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:58:54.610335    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:58:59.612685    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:58:59.612803    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:58:59.624844    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:58:59.624928    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:58:59.637381    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:58:59.637460    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:58:59.649668    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:58:59.649748    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:58:59.662053    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:58:59.662134    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:58:59.674172    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:58:59.674251    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:58:59.686713    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:58:59.686787    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:58:59.698833    8269 logs.go:276] 0 containers: []
	W0429 04:58:59.698845    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:58:59.698909    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:58:59.711344    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:58:59.711364    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:58:59.711380    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:58:59.726044    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:58:59.726060    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:58:59.740388    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:58:59.740402    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:58:59.754333    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:58:59.754344    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:58:59.795987    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:58:59.796001    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:58:59.809849    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:58:59.809863    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:58:59.829525    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:58:59.829541    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:58:59.856835    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:58:59.856853    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:58:59.870545    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:58:59.870559    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:58:59.910236    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:58:59.910352    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:58:59.911182    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:58:59.911195    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:58:59.927074    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:58:59.927087    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:58:59.941376    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:58:59.941388    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:58:59.957003    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:58:59.957014    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:58:59.972979    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:58:59.972991    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:58:59.984545    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:58:59.984558    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:58:59.989456    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:58:59.989466    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:59:00.010077    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:59:00.010090    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:59:00.022013    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:00.022026    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:59:00.022055    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:59:00.022060    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:00.022063    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:00.022067    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:00.022070    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:10.024857    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:59:15.027067    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:59:15.027176    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:59:15.038607    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:59:15.038681    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:59:15.050153    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:59:15.050228    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:59:15.066552    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:59:15.066625    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:59:15.078283    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:59:15.078361    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:59:15.090363    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:59:15.090433    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:59:15.102002    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:59:15.102074    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:59:15.114622    8269 logs.go:276] 0 containers: []
	W0429 04:59:15.114638    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:59:15.114717    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:59:15.125745    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:59:15.125761    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:59:15.125767    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:59:15.140333    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:59:15.140344    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:59:15.158398    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:59:15.158413    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:59:15.195453    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:15.195552    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:15.196305    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:59:15.196312    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:59:15.210367    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:59:15.210378    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:59:15.222597    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:59:15.222609    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:59:15.234750    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:59:15.234762    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
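The container-status command is a two-level shell fallback: `which crictl || echo crictl` keeps the command line syntactically valid whether or not crictl is installed, and the trailing `|| sudo docker ps -a` catches the case where crictl is missing or fails. The same fallback expressed in Go (illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, mirroring:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() (string, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
    			return string(out), nil
    		}
    	}
    	out, err := exec.Command("docker", "ps", "-a").Output()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("both runtimes failed:", err)
    		return
    	}
    	fmt.Print(out)
    }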
	I0429 04:59:15.247005    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:59:15.247017    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:59:15.261817    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:59:15.261827    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:59:15.275726    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:59:15.275737    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:59:15.295821    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:59:15.295833    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:59:15.307652    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:59:15.307664    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:59:15.333191    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:59:15.333200    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:59:15.337687    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:59:15.337693    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:59:15.372986    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:59:15.372997    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:59:15.385029    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:59:15.385039    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:59:15.401222    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:59:15.401233    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:59:15.412774    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:15.412786    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:59:15.412813    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:59:15.412818    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:15.412822    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:15.412826    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:15.412829    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:25.416687    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:59:30.418753    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:59:30.418969    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:59:30.434324    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:59:30.434409    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:59:30.445844    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:59:30.445918    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:59:30.458350    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:59:30.458426    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:59:30.469979    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:59:30.470057    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:59:30.490764    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:59:30.490839    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:59:30.506032    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:59:30.506108    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:59:30.517296    8269 logs.go:276] 0 containers: []
	W0429 04:59:30.517309    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:59:30.517371    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:59:30.529843    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:59:30.529864    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:59:30.529869    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:59:30.551818    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:59:30.551828    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:59:30.563445    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:59:30.563458    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:59:30.578864    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:59:30.578880    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:59:30.594963    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:59:30.594977    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:59:30.611924    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:59:30.611936    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:59:30.623363    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:59:30.623377    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:59:30.647774    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:59:30.647784    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:59:30.660820    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:59:30.660833    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:59:30.701283    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:30.701463    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:30.702254    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:59:30.702264    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:59:30.716929    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:59:30.716941    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:59:30.729756    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:59:30.729769    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:59:30.744827    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:59:30.744844    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:59:30.758076    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:59:30.758088    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:59:30.770643    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:59:30.770655    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:59:30.784635    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:59:30.784648    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:59:30.790042    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:59:30.790054    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:59:30.826731    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:30.826744    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:59:30.826770    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:59:30.826775    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:30.826787    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:30.826829    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:30.826863    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:40.831186    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 04:59:45.833875    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:59:45.834293    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:59:45.874907    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:59:45.875048    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:59:45.899331    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:59:45.899428    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:59:45.914201    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:59:45.914280    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:59:45.928091    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:59:45.928167    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:59:45.938897    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:59:45.938964    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:59:45.949588    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:59:45.949658    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:59:45.960200    8269 logs.go:276] 0 containers: []
	W0429 04:59:45.960211    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:59:45.960262    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:59:45.970888    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:59:45.970906    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:59:45.970912    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:59:45.984053    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:59:45.984065    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:59:46.017899    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:59:46.017911    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:59:46.033507    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:59:46.033518    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:59:46.048042    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:59:46.048052    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:59:46.059849    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:59:46.059860    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:59:46.077133    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:59:46.077144    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:59:46.089133    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:59:46.089147    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:59:46.127039    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:46.127143    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:46.127884    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:59:46.127888    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:59:46.133381    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:59:46.133390    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:59:46.153045    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:59:46.153055    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:59:46.168332    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:59:46.168345    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:59:46.184386    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:59:46.184399    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:59:46.207162    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:59:46.207169    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:59:46.221034    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:59:46.221045    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:59:46.233488    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:59:46.233500    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:59:46.244749    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:59:46.244761    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:59:46.256032    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:46.256044    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:59:46.256077    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:59:46.256082    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:46.256085    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:46.256089    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:46.256091    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:56.258520    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:01.260767    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:01.260986    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:00:01.275935    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 05:00:01.276020    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:00:01.287256    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 05:00:01.287327    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:00:01.297259    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 05:00:01.297329    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:00:01.307446    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 05:00:01.307516    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:00:01.322686    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 05:00:01.322756    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:00:01.333263    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 05:00:01.333336    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:00:01.344490    8269 logs.go:276] 0 containers: []
	W0429 05:00:01.344507    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:00:01.344566    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:00:01.355541    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 05:00:01.355560    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:00:01.355567    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:00:01.360183    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 05:00:01.360190    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 05:00:01.374975    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 05:00:01.374986    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 05:00:01.386125    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:00:01.386136    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:00:01.421351    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:01.421442    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:01.422148    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 05:00:01.422151    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 05:00:01.441603    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 05:00:01.441616    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 05:00:01.452988    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:00:01.453000    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:00:01.477331    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 05:00:01.477337    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 05:00:01.491158    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 05:00:01.491170    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 05:00:01.506458    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 05:00:01.506469    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 05:00:01.522288    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 05:00:01.522298    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 05:00:01.539077    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 05:00:01.539085    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 05:00:01.550790    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:00:01.550801    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:00:01.562363    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:00:01.562373    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:00:01.596196    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 05:00:01.596205    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 05:00:01.610564    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 05:00:01.610572    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 05:00:01.621612    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 05:00:01.621623    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 05:00:01.632820    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:01.632830    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:00:01.632857    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:00:01.632861    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:01.632864    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:01.632871    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:01.632930    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:00:11.637022    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:16.639258    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:16.639467    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:00:16.651170    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 05:00:16.651248    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:00:16.667290    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 05:00:16.667364    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:00:16.678367    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 05:00:16.678441    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:00:16.688956    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 05:00:16.689027    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:00:16.699692    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 05:00:16.699763    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:00:16.715073    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 05:00:16.715146    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:00:16.725770    8269 logs.go:276] 0 containers: []
	W0429 05:00:16.725782    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:00:16.725843    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:00:16.739862    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 05:00:16.739882    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 05:00:16.739888    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 05:00:16.754651    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 05:00:16.754660    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 05:00:16.766338    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:00:16.766350    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:00:16.783390    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 05:00:16.783400    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 05:00:16.795034    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 05:00:16.795046    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 05:00:16.807289    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 05:00:16.807300    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 05:00:16.819348    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 05:00:16.819358    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 05:00:16.843573    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:00:16.843585    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:00:16.881215    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:16.881310    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:16.882070    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:00:16.882078    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:00:16.917662    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 05:00:16.917672    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 05:00:16.931507    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 05:00:16.931518    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 05:00:16.945537    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 05:00:16.945548    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 05:00:16.957688    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 05:00:16.957699    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 05:00:16.973576    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:00:16.973586    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:00:16.978104    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 05:00:16.978115    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 05:00:16.997987    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 05:00:16.997997    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 05:00:17.009117    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:00:17.009128    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:00:17.032881    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:17.032889    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:00:17.032912    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:00:17.032916    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:17.032920    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:17.032925    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:17.032927    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:00:27.037071    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:32.039863    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:32.039963    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:00:32.051427    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 05:00:32.051494    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:00:32.063565    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 05:00:32.063630    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:00:32.073631    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 05:00:32.073691    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:00:32.084353    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 05:00:32.084408    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:00:32.095003    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 05:00:32.095071    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:00:32.105874    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 05:00:32.105944    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:00:32.116523    8269 logs.go:276] 0 containers: []
	W0429 05:00:32.116537    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:00:32.116590    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:00:32.127560    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 05:00:32.127576    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 05:00:32.127581    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 05:00:32.142656    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 05:00:32.142671    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 05:00:32.154465    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 05:00:32.154476    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 05:00:32.165845    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:00:32.165856    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:00:32.189576    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 05:00:32.189593    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 05:00:32.202058    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 05:00:32.202072    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 05:00:32.217165    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 05:00:32.217176    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 05:00:32.228964    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 05:00:32.228974    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 05:00:32.240417    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:00:32.240431    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:00:32.278864    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:32.278959    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:32.279706    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 05:00:32.279712    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 05:00:32.295573    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 05:00:32.295586    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 05:00:32.313934    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 05:00:32.313944    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 05:00:32.331402    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 05:00:32.331416    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 05:00:32.343725    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:00:32.343738    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:00:32.349931    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:00:32.349941    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:00:32.384209    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 05:00:32.384221    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 05:00:32.406472    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:00:32.406483    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:00:32.418061    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:32.418071    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:00:32.418096    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:00:32.418100    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:32.418103    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:32.418114    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:32.418118    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:00:42.422312    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:47.424671    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:47.424711    8269 kubeadm.go:591] duration metric: took 4m8.729023542s to restartPrimaryControlPlane
	W0429 05:00:47.424753    8269 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 05:00:47.424769    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
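Having spent 4m8s on failed healthz probes, minikube abandons the restart path and resets the cluster. The reset is one remote command with the pinned v1.24.1 binaries prepended to PATH so the matching kubeadm version runs. Invoking the equivalent command from Go might look like the sketch below; the flags and paths are taken from the log line above, while the runner itself is an illustration, not minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Prepend minikube's pinned binaries so the right kubeadm runs,
    	// as the log's `sudo env PATH=...` does.
    	cmd := exec.Command("sudo", "env",
    		"PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"),
    		"kubeadm", "reset",
    		"--cri-socket", "/var/run/cri-dockerd.sock",
    		"--force")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Println("kubeadm reset failed:", err)
    	}
    }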
	I0429 05:00:48.395684    8269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 05:00:48.400508    8269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 05:00:48.403302    8269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 05:00:48.406071    8269 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 05:00:48.406078    8269 kubeadm.go:156] found existing configuration files:
	
	I0429 05:00:48.406099    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/admin.conf
	I0429 05:00:48.408660    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 05:00:48.408688    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 05:00:48.411765    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/kubelet.conf
	I0429 05:00:48.414330    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 05:00:48.414355    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 05:00:48.417709    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/controller-manager.conf
	I0429 05:00:48.420735    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 05:00:48.420757    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 05:00:48.423767    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/scheduler.conf
	I0429 05:00:48.426428    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 05:00:48.426454    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
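The grep-then-rm cycle above is a stale-config sweep: a kubeconfig is kept only if it already points at the current control-plane endpoint, and since kubeadm reset removed all four files, every grep exits with status 2 and every rm is a no-op, clearing the way for kubeadm init to regenerate them. Compressed into a Go sketch, with the endpoint taken from the log and the rest illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:51195"
    	for _, conf := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + conf
    		data, err := os.ReadFile(path)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			fmt.Println("kept", path) // already points at the current endpoint
    			continue
    		}
    		// Missing or pointing at a stale endpoint: remove so that
    		// kubeadm init regenerates it.
    		os.Remove(path)
    		fmt.Println("removed stale", path)
    	}
    }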
	I0429 05:00:48.429501    8269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 05:00:48.446003    8269 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0429 05:00:48.446036    8269 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 05:00:48.498108    8269 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 05:00:48.498236    8269 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 05:00:48.498326    8269 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 05:00:48.547913    8269 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 05:00:48.553113    8269 out.go:204]   - Generating certificates and keys ...
	I0429 05:00:48.553144    8269 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 05:00:48.553192    8269 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 05:00:48.553232    8269 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 05:00:48.553289    8269 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 05:00:48.553322    8269 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 05:00:48.553378    8269 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 05:00:48.553410    8269 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 05:00:48.553445    8269 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 05:00:48.553515    8269 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 05:00:48.553576    8269 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 05:00:48.553594    8269 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 05:00:48.553620    8269 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 05:00:48.727227    8269 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 05:00:48.902568    8269 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 05:00:48.983664    8269 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 05:00:49.155668    8269 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 05:00:49.185085    8269 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 05:00:49.185476    8269 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 05:00:49.185546    8269 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 05:00:49.274525    8269 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 05:00:49.278327    8269 out.go:204]   - Booting up control plane ...
	I0429 05:00:49.278389    8269 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 05:00:49.279493    8269 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 05:00:49.279527    8269 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 05:00:49.279565    8269 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 05:00:49.279692    8269 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 05:00:53.782149    8269 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504308 seconds
	I0429 05:00:53.782209    8269 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 05:00:53.787843    8269 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 05:00:54.312789    8269 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 05:00:54.313081    8269 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-310000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 05:00:54.817550    8269 kubeadm.go:309] [bootstrap-token] Using token: 9k8pha.lxg4q7zdgu456eb0
	I0429 05:00:54.821531    8269 out.go:204]   - Configuring RBAC rules ...
	I0429 05:00:54.821612    8269 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 05:00:54.822412    8269 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 05:00:54.828623    8269 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I0429 05:00:54.829698    8269 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0429 05:00:54.830666    8269 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 05:00:54.831641    8269 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 05:00:54.835509    8269 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 05:00:54.996704    8269 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 05:00:55.224168    8269 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 05:00:55.224578    8269 kubeadm.go:309] 
	I0429 05:00:55.224629    8269 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 05:00:55.224649    8269 kubeadm.go:309] 
	I0429 05:00:55.224715    8269 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 05:00:55.224732    8269 kubeadm.go:309] 
	I0429 05:00:55.224800    8269 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 05:00:55.224832    8269 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 05:00:55.224858    8269 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 05:00:55.224860    8269 kubeadm.go:309] 
	I0429 05:00:55.224888    8269 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 05:00:55.224890    8269 kubeadm.go:309] 
	I0429 05:00:55.224912    8269 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 05:00:55.224914    8269 kubeadm.go:309] 
	I0429 05:00:55.224955    8269 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 05:00:55.225035    8269 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 05:00:55.225083    8269 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 05:00:55.225090    8269 kubeadm.go:309] 
	I0429 05:00:55.225129    8269 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 05:00:55.225180    8269 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 05:00:55.225186    8269 kubeadm.go:309] 
	I0429 05:00:55.225249    8269 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9k8pha.lxg4q7zdgu456eb0 \
	I0429 05:00:55.225345    8269 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4832dc51ff6d0e6d2b485eb727ddc01b0033877744e5e13a6c0f8b67a1b7145 \
	I0429 05:00:55.225360    8269 kubeadm.go:309] 	--control-plane 
	I0429 05:00:55.225368    8269 kubeadm.go:309] 
	I0429 05:00:55.225412    8269 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 05:00:55.225417    8269 kubeadm.go:309] 
	I0429 05:00:55.225483    8269 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9k8pha.lxg4q7zdgu456eb0 \
	I0429 05:00:55.225572    8269 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4832dc51ff6d0e6d2b485eb727ddc01b0033877744e5e13a6c0f8b67a1b7145 
	I0429 05:00:55.225644    8269 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
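For reference, the sha256 value passed to --discovery-token-ca-cert-hash above can be recomputed from the cluster CA certificate. A minimal sketch, assuming the default kubeadm CA path on the control-plane node:

	# hash the CA public key the same way kubeadm does for join discovery
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'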
	I0429 05:00:55.225651    8269 cni.go:84] Creating CNI manager for ""
	I0429 05:00:55.225658    8269 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:00:55.229902    8269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 05:00:55.236884    8269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 05:00:55.239905    8269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
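The 1-k8s.conflist copied here is a standard bridge CNI configuration. A minimal sketch of such a file, for illustration only (the exact 496-byte content minikube generates may differ):

	# write an illustrative bridge CNI config to the path used in the log
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF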
	I0429 05:00:55.244630    8269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 05:00:55.244673    8269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 05:00:55.244703    8269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-310000 minikube.k8s.io/updated_at=2024_04_29T05_00_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844 minikube.k8s.io/name=running-upgrade-310000 minikube.k8s.io/primary=true
	I0429 05:00:55.290501    8269 ops.go:34] apiserver oom_adj: -16
	I0429 05:00:55.290557    8269 kubeadm.go:1107] duration metric: took 45.920292ms to wait for elevateKubeSystemPrivileges
	W0429 05:00:55.290575    8269 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 05:00:55.290579    8269 kubeadm.go:393] duration metric: took 4m16.60843675s to StartCluster
	I0429 05:00:55.290587    8269 settings.go:142] acquiring lock: {Name:mka93054a23bdbf29aca25affe181be869710883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:55.290745    8269 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:00:55.291154    8269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/kubeconfig: {Name:mkc4105502c44b2331a2dd91226134a74ad93594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:55.291402    8269 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:00:55.294972    8269 out.go:177] * Verifying Kubernetes components...
	I0429 05:00:55.291418    8269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 05:00:55.291585    8269 config.go:182] Loaded profile config "running-upgrade-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:00:55.302878    8269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:55.302878    8269 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-310000"
	I0429 05:00:55.302877    8269 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-310000"
	I0429 05:00:55.302907    8269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-310000"
	I0429 05:00:55.302917    8269 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-310000"
	W0429 05:00:55.302921    8269 addons.go:243] addon storage-provisioner should already be in state true
	I0429 05:00:55.302934    8269 host.go:66] Checking if "running-upgrade-310000" exists ...
	I0429 05:00:55.303973    8269 kapi.go:59] client config for running-upgrade-310000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.key", CAFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101953cb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
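The client config dumped above amounts to certificate-based access to https://10.0.2.15:8443 using the profile's client certificate and key. An equivalent manual probe with the same files (a sketch; the paths are taken from the config above):

	# hit the apiserver healthz endpoint with the profile's client certs
	curl --cacert /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt \
	  --cert /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.crt \
	  --key /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.key \
	  https://10.0.2.15:8443/healthz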
	I0429 05:00:55.304352    8269 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-310000"
	W0429 05:00:55.304356    8269 addons.go:243] addon default-storageclass should already be in state true
	I0429 05:00:55.304363    8269 host.go:66] Checking if "running-upgrade-310000" exists ...
	I0429 05:00:55.309683    8269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:55.313971    8269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 05:00:55.313978    8269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 05:00:55.313993    8269 sshutil.go:53] new ssh client: &{IP:localhost Port:51162 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/running-upgrade-310000/id_rsa Username:docker}
	I0429 05:00:55.314710    8269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 05:00:55.314714    8269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 05:00:55.314717    8269 sshutil.go:53] new ssh client: &{IP:localhost Port:51162 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/running-upgrade-310000/id_rsa Username:docker}
	I0429 05:00:55.399111    8269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 05:00:55.403672    8269 api_server.go:52] waiting for apiserver process to appear ...
	I0429 05:00:55.403713    8269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:00:55.407397    8269 api_server.go:72] duration metric: took 115.986125ms to wait for apiserver process to appear ...
	I0429 05:00:55.407405    8269 api_server.go:88] waiting for apiserver healthz status ...
	I0429 05:00:55.407411    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:55.458166    8269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 05:00:55.461059    8269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 05:01:00.409271    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:00.409316    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:05.409687    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:05.409752    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:10.410088    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:10.410107    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:15.410497    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:15.410560    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:20.411059    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:20.411091    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:25.411752    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:25.411806    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0429 05:01:25.811154    8269 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0429 05:01:25.819318    8269 out.go:177] * Enabled addons: storage-provisioner
	I0429 05:01:25.827272    8269 addons.go:505] duration metric: took 30.535929958s for enable addons: enabled=[storage-provisioner]
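The default-storageclass failure above is an i/o timeout against 10.0.2.15:8443, the same endpoint the healthz poller keeps timing out on below, so the root cause is an unreachable API server rather than the addon itself. A quick TCP reachability check (a sketch; address and port come from the log):

	# verify the apiserver port is even accepting connections
	nc -vz -w 5 10.0.2.15 8443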
	I0429 05:01:30.412687    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:30.412730    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:35.413852    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:35.413901    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:40.415253    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:40.415279    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:45.416985    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:45.417034    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:50.419343    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:50.419389    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:55.421650    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:55.421825    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:55.437130    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:01:55.437199    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:55.449449    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:01:55.449525    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:55.460178    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:01:55.460247    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:55.471253    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:01:55.471317    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:55.493896    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:01:55.493965    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:55.508780    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:01:55.508861    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:55.519221    8269 logs.go:276] 0 containers: []
	W0429 05:01:55.519236    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:55.519300    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:55.529690    8269 logs.go:276] 1 containers: [497dc39d1f27]
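Each diagnostic cycle above locates the control-plane containers one component at a time with a docker name filter. The same inventory can be taken in a single pass (a sketch using the filters shown in the log):

	# list every k8s_<component> container with its ID and status
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager storage-provisioner; do
	  docker ps -a --filter "name=k8s_${c}" --format "${c}: {{.ID}} {{.Status}}"
	done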
	I0429 05:01:55.529706    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:55.529712    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:55.534245    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:55.534254    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:55.568824    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:01:55.568837    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:01:55.585667    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:01:55.585679    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:01:55.597098    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:01:55.597109    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:01:55.615666    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:01:55.615679    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:55.627051    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:55.627062    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:01:55.644161    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:01:55.644254    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
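The kubelet problem flagged here is a node-authorizer denial: the kubelet's node identity may not list the kube-proxy ConfigMap because the authorizer finds no relationship between the node and that object. Once the API server is reachable again, the permission can be probed directly (a sketch; the node name is taken from the log):

	# check the denied permission as the node's own identity
	kubectl auth can-i list configmaps -n kube-system \
	  --as system:node:running-upgrade-310000 --as-group system:nodes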
	I0429 05:01:55.660811    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:01:55.660818    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:01:55.674372    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:01:55.674387    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:01:55.685782    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:01:55.685794    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:01:55.697108    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:01:55.697118    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:01:55.719458    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:01:55.719468    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:01:55.737133    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:55.737144    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:55.760462    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:55.760471    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:01:55.760507    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:01:55.760511    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:01:55.760514    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:01:55.760519    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:55.760522    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:05.764680    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:10.767009    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:10.767191    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:10.783086    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:02:10.783175    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:10.795883    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:02:10.795961    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:10.806970    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:02:10.807037    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:10.817513    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:02:10.817583    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:10.827455    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:02:10.827519    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:10.837783    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:02:10.837854    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:10.847667    8269 logs.go:276] 0 containers: []
	W0429 05:02:10.847679    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:10.847736    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:10.862275    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:02:10.862288    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:10.862293    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:10.867109    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:02:10.867117    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:02:10.878570    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:02:10.878581    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:02:10.889945    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:02:10.889959    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:02:10.904166    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:10.904176    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:10.928715    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:02:10.928722    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:10.940433    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:10.940444    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:02:10.958719    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:10.958842    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:10.975318    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:10.975325    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:11.011273    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:02:11.011284    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:02:11.025673    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:02:11.025683    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:02:11.039487    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:02:11.039498    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:02:11.050804    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:02:11.050816    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:02:11.068546    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:02:11.068557    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:02:11.085547    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:11.085559    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:02:11.085585    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:02:11.085590    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:11.085597    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:11.085601    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:11.085605    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:21.088597    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:26.090914    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:26.091093    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:26.108446    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:02:26.108529    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:26.123208    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:02:26.123282    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:26.135177    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:02:26.135248    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:26.145196    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:02:26.145261    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:26.156996    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:02:26.157065    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:26.167815    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:02:26.167885    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:26.178587    8269 logs.go:276] 0 containers: []
	W0429 05:02:26.178597    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:26.178654    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:26.188966    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:02:26.188983    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:02:26.188989    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:02:26.203343    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:02:26.203352    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:02:26.217970    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:02:26.217980    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:02:26.229450    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:02:26.229460    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:02:26.248357    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:02:26.248368    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:02:26.259788    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:02:26.259803    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:26.271099    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:26.271112    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:02:26.288279    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:26.288376    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:26.304559    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:26.304565    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:26.339786    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:02:26.339801    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:02:26.353399    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:02:26.353410    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:02:26.371195    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:02:26.371204    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:02:26.388935    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:26.388945    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:26.411990    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:26.411998    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:26.416175    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:26.416185    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:02:26.416209    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:02:26.416213    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:26.416217    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:26.416220    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:26.416223    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:36.420099    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:41.422486    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:41.422896    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:41.473895    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:02:41.474016    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:41.497410    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:02:41.497489    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:41.511379    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:02:41.511458    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:41.522019    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:02:41.522085    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:41.534587    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:02:41.534661    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:41.547273    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:02:41.547338    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:41.561433    8269 logs.go:276] 0 containers: []
	W0429 05:02:41.561444    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:41.561504    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:41.572044    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:02:41.572058    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:41.572063    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:41.576607    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:41.576616    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:41.645080    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:02:41.645092    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:02:41.661207    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:02:41.661219    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:02:41.672557    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:02:41.672569    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:02:41.684613    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:02:41.684628    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:02:41.705877    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:02:41.705888    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:02:41.717121    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:41.717134    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:41.742144    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:41.742155    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:02:41.759969    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:41.760064    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:41.776832    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:02:41.776841    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:02:41.798418    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:02:41.798430    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:02:41.813122    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:02:41.813133    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:02:41.828161    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:02:41.828179    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:41.839330    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:41.839339    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:02:41.839366    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:02:41.839374    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:41.839378    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:41.839382    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:41.839476    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:51.843625    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:56.846532    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:56.846877    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:56.882834    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:02:56.882967    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:56.914232    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:02:56.914311    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:56.927289    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:02:56.927363    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:56.938907    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:02:56.938971    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:56.949616    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:02:56.949680    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:56.960223    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:02:56.960292    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:56.970831    8269 logs.go:276] 0 containers: []
	W0429 05:02:56.970846    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:56.970901    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:56.981465    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:02:56.981480    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:56.981484    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:57.006889    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:57.006899    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:57.043427    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:02:57.043442    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:02:57.057674    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:02:57.057687    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:02:57.071868    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:02:57.071878    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:02:57.084266    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:02:57.084277    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:02:57.098270    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:02:57.098284    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:02:57.116529    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:02:57.116538    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:02:57.128306    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:02:57.128318    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:57.139790    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:57.139800    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:02:57.158828    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:57.158923    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:57.175348    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:57.175355    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:57.180222    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:02:57.180231    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:02:57.192230    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:02:57.192243    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:02:57.207399    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:57.207408    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:02:57.207432    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:02:57.207436    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:57.207440    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:57.207444    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:57.207447    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:07.210673    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:12.212978    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:12.213390    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:12.251919    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:03:12.252057    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:12.273512    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:03:12.273631    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:12.288414    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:03:12.288487    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:12.300929    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:03:12.300992    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:12.312191    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:03:12.312262    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:12.323612    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:03:12.323683    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:12.334085    8269 logs.go:276] 0 containers: []
	W0429 05:03:12.334098    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:12.334161    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:12.349274    8269 logs.go:276] 1 containers: [497dc39d1f27]
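[Editor's note] Each discovery step above is a docker ps -a call whose --format flag is a Go text/template rendered once per container. A minimal illustration of that rendering; the struct is a simplified stand-in for docker's real template context, with IDs copied from the log:

    package main

    import (
        "os"
        "text/template"
    )

    // container stands in for the fields docker exposes to --format templates.
    type container struct {
        ID    string
        Names string
    }

    func main() {
        // The same template string passed as --format={{.ID}} in the log.
        tmpl := template.Must(template.New("ps").Parse("{{.ID}}\n"))
        for _, c := range []container{
            {ID: "9a188a09281c", Names: "k8s_kube-apiserver"},
            {ID: "9d800ecb2445", Names: "k8s_etcd"},
        } {
            tmpl.Execute(os.Stdout, c) // prints just the ID, one per line
        }
    }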
	I0429 05:03:12.349292    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:12.349298    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:03:12.368217    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:12.368310    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:12.384614    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:12.384620    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:12.389280    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:03:12.389290    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:03:12.401308    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:12.401319    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:12.424746    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:03:12.424757    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:03:12.435969    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:03:12.435981    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:03:12.448026    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:03:12.448037    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:03:12.460228    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:03:12.460243    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:12.473784    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:03:12.473798    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:03:12.488018    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:03:12.488027    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:03:12.504224    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:03:12.504239    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:03:12.525290    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:12.525305    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:12.560335    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:03:12.560347    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:03:12.574825    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:03:12.574835    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:03:12.593783    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:03:12.593795    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:03:12.611090    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:12.611099    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:03:12.611124    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:03:12.611129    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:12.611133    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:12.611157    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:12.611162    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:22.615366    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:27.617810    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:27.617963    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:27.629435    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:03:27.629513    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:27.639830    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:03:27.639897    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:27.650611    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:03:27.650687    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:27.661303    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:03:27.661376    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:27.671427    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:03:27.671485    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:27.681771    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:03:27.681840    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:27.696255    8269 logs.go:276] 0 containers: []
	W0429 05:03:27.696266    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:27.696321    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:27.709676    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:03:27.709695    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:27.709700    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:27.745207    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:03:27.745228    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:03:27.759518    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:03:27.759529    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:03:27.773896    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:03:27.773907    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:03:27.785586    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:03:27.785598    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:03:27.796671    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:03:27.796682    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:03:27.809293    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:03:27.809304    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:27.820701    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:27.820711    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:03:27.838672    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:27.838767    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:27.855094    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:03:27.855102    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:03:27.870559    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:03:27.870569    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:03:27.882770    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:03:27.882783    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:03:27.899854    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:27.899864    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:27.924465    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:27.924473    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:27.929282    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:03:27.929288    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:03:27.941516    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:03:27.941529    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:03:27.953103    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:27.953114    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:03:27.953140    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:03:27.953145    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:27.953148    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:27.953153    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:27.953156    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:37.957307    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:42.959678    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:42.959958    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:42.980296    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:03:42.980396    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:42.995927    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:03:42.996013    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:43.008893    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:03:43.008969    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:43.020693    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:03:43.020766    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:43.031045    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:03:43.031112    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:43.041230    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:03:43.041307    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:43.051784    8269 logs.go:276] 0 containers: []
	W0429 05:03:43.051798    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:43.051861    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:43.066435    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:03:43.066450    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:43.066455    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:03:43.083225    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:43.083321    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:43.099739    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:43.099749    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:43.135129    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:03:43.135140    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:03:43.150909    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:03:43.150920    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:03:43.162702    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:03:43.162712    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:03:43.174209    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:03:43.174220    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:03:43.188826    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:03:43.188838    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:03:43.201009    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:03:43.201018    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:03:43.219050    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:43.219059    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:43.243267    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:03:43.243274    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:03:43.255036    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:03:43.255048    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:03:43.266604    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:03:43.266615    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:43.278112    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:43.278124    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:43.282673    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:03:43.282682    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:03:43.295238    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:03:43.295248    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:03:43.342576    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:43.342586    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:03:43.342611    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:03:43.342615    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:43.342626    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:43.342633    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:43.342638    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:53.346870    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:58.349286    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:58.349618    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:58.383112    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:03:58.383242    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:58.404066    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:03:58.404165    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:58.425213    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:03:58.425292    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:58.442497    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:03:58.442568    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:58.453639    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:03:58.453709    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:58.464951    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:03:58.465018    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:58.475933    8269 logs.go:276] 0 containers: []
	W0429 05:03:58.475947    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:58.476010    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:58.499173    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:03:58.499196    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:03:58.499202    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:03:58.518382    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:58.518395    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:58.543456    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:58.543467    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:03:58.561631    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:58.561724    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:58.578100    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:58.578107    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:58.582259    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:03:58.582265    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:03:58.596279    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:03:58.596292    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:03:58.607981    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:03:58.607994    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:03:58.635719    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:03:58.635730    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:03:58.650998    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:03:58.651008    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:58.662844    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:58.662853    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:58.700216    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:03:58.700229    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:03:58.714479    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:03:58.714490    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:03:58.726335    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:03:58.726348    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:03:58.738330    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:03:58.738341    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:03:58.750187    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:03:58.750199    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:03:58.763722    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:58.763732    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:03:58.763758    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:03:58.763763    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:58.763767    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:58.763772    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:58.763775    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:04:08.767904    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:13.770190    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:13.770358    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:13.784844    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:04:13.784906    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:13.797069    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:04:13.797161    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:13.808399    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:04:13.808476    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:13.818814    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:04:13.818882    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:13.829467    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:04:13.829532    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:13.840044    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:04:13.840110    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:13.852885    8269 logs.go:276] 0 containers: []
	W0429 05:04:13.852898    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:13.852947    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:13.863683    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:04:13.863701    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:13.863706    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:13.868592    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:04:13.868600    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:04:13.882710    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:04:13.882720    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:04:13.894048    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:04:13.894064    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:04:13.905961    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:13.905971    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:04:13.923669    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:13.923762    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:13.940212    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:13.940219    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:13.976134    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:04:13.976148    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:04:13.998968    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:04:13.998979    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:04:14.013661    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:04:14.013676    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:04:14.031998    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:04:14.032008    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:04:14.043150    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:04:14.043160    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:04:14.054649    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:04:14.054658    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:04:14.065866    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:04:14.065879    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:14.077136    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:04:14.077149    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:04:14.088771    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:14.088782    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:14.112535    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:14.112545    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:04:14.112569    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:04:14.112573    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:14.112577    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:14.112581    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:14.112584    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:04:24.116734    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:29.118755    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:29.118963    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:29.137587    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:04:29.137674    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:29.150829    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:04:29.150904    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:29.162295    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:04:29.162370    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:29.172546    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:04:29.172614    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:29.182984    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:04:29.183053    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:29.193429    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:04:29.193490    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:29.203860    8269 logs.go:276] 0 containers: []
	W0429 05:04:29.203871    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:29.203930    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:29.214168    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:04:29.214184    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:04:29.214189    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:04:29.225798    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:04:29.225809    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:04:29.237280    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:29.237291    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:29.260909    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:29.260922    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:04:29.278002    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:29.278096    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:29.294297    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:29.294304    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:29.330180    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:04:29.330192    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:04:29.342035    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:04:29.342046    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:04:29.366473    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:04:29.366484    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:29.379705    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:04:29.379716    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:04:29.393974    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:04:29.393986    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:04:29.405481    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:04:29.405496    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:04:29.417643    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:04:29.417655    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:04:29.430197    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:29.430205    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:29.434833    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:04:29.434843    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:04:29.450760    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:04:29.450774    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:04:29.465944    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:29.465954    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:04:29.465980    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:04:29.465984    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:29.465988    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:29.465992    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:29.465994    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:04:39.466644    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:44.468962    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:44.469240    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:44.497181    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:04:44.497301    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:44.516425    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:04:44.516503    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:44.529098    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:04:44.529182    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:44.540554    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:04:44.540617    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:44.551032    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:04:44.551096    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:44.560936    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:04:44.561007    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:44.571141    8269 logs.go:276] 0 containers: []
	W0429 05:04:44.571151    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:44.571207    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:44.581165    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:04:44.581185    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:04:44.581190    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:04:44.596428    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:44.596439    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:44.601175    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:44.601185    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:44.644077    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:04:44.644090    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:04:44.656576    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:04:44.656588    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:04:44.667986    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:04:44.667995    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:04:44.682992    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:04:44.683005    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:04:44.694150    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:44.694161    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:04:44.712520    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:44.712624    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:44.729678    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:04:44.729692    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:04:44.743040    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:04:44.743054    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:04:44.760045    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:04:44.760059    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:04:44.771910    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:44.771920    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:44.796280    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:04:44.796287    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:44.808629    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:04:44.808641    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:04:44.822790    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:04:44.822801    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:04:44.834870    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:44.834880    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:04:44.834908    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:04:44.834913    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:44.834916    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:44.834920    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:44.834922    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:04:54.837922    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:59.840341    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:59.844943    8269 out.go:177] 
	W0429 05:04:59.850789    8269 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0429 05:04:59.850799    8269 out.go:239] * 
	W0429 05:04:59.851414    8269 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:04:59.862866    8269 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-310000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
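The repeated `api_server.go:269] stopped: ... context deadline exceeded` probes and the final GUEST_START error above boil down to minikube polling the apiserver's /healthz endpoint until its 6m node-start deadline lapsed. As a rough illustration only (not minikube's actual implementation; the endpoint, intervals, and TLS handling here are assumptions), a poll loop of that shape looks like:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes url until it returns HTTP 200 or the overall
    // deadline lapses. Illustrative sketch only, not minikube's code.
    func pollHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            // Per-request timeout; mirrors the "Client.Timeout exceeded" errors above.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The guest apiserver uses a self-signed cert in this setup.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthy
                }
            }
            time.Sleep(10 * time.Second) // back off between probes
        }
        return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
            fmt.Println("GUEST_START:", err)
        }
    }

In this run every probe to https://10.0.2.15:8443/healthz timed out, so the loop above would exhaust its deadline exactly as the log shows.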
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-04-29 05:04:59.923578 -0700 PDT m=+1272.530477501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-310000 -n running-upgrade-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-310000 -n running-upgrade-310000: exit status 2 (15.773349s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-310000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-163000          | force-systemd-flag-163000 | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-236000              | force-systemd-env-236000  | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-236000           | force-systemd-env-236000  | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT | 29 Apr 24 04:55 PDT |
	| start   | -p docker-flags-285000                | docker-flags-285000       | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-163000             | force-systemd-flag-163000 | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-163000          | force-systemd-flag-163000 | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT | 29 Apr 24 04:55 PDT |
	| start   | -p cert-expiration-508000             | cert-expiration-508000    | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-285000 ssh               | docker-flags-285000       | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-285000 ssh               | docker-flags-285000       | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-285000                | docker-flags-285000       | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT | 29 Apr 24 04:55 PDT |
	| start   | -p cert-options-495000                | cert-options-495000       | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-495000 ssh               | cert-options-495000       | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-495000 -- sudo        | cert-options-495000       | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-495000                | cert-options-495000       | jenkins | v1.33.0 | 29 Apr 24 04:55 PDT | 29 Apr 24 04:55 PDT |
	| start   | -p running-upgrade-310000             | minikube                  | jenkins | v1.26.0 | 29 Apr 24 04:55 PDT | 29 Apr 24 04:56 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-310000             | running-upgrade-310000    | jenkins | v1.33.0 | 29 Apr 24 04:56 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-508000             | cert-expiration-508000    | jenkins | v1.33.0 | 29 Apr 24 04:58 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-508000             | cert-expiration-508000    | jenkins | v1.33.0 | 29 Apr 24 04:58 PDT | 29 Apr 24 04:58 PDT |
	| start   | -p kubernetes-upgrade-894000          | kubernetes-upgrade-894000 | jenkins | v1.33.0 | 29 Apr 24 04:58 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-894000          | kubernetes-upgrade-894000 | jenkins | v1.33.0 | 29 Apr 24 04:58 PDT | 29 Apr 24 04:58 PDT |
	| start   | -p kubernetes-upgrade-894000          | kubernetes-upgrade-894000 | jenkins | v1.33.0 | 29 Apr 24 04:58 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-894000          | kubernetes-upgrade-894000 | jenkins | v1.33.0 | 29 Apr 24 04:58 PDT | 29 Apr 24 04:58 PDT |
	| start   | -p stopped-upgrade-383000             | minikube                  | jenkins | v1.26.0 | 29 Apr 24 04:58 PDT | 29 Apr 24 04:59 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-383000 stop           | minikube                  | jenkins | v1.26.0 | 29 Apr 24 04:59 PDT | 29 Apr 24 04:59 PDT |
	| start   | -p stopped-upgrade-383000             | stopped-upgrade-383000    | jenkins | v1.33.0 | 29 Apr 24 04:59 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 04:59:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 04:59:44.210421    8430 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:59:44.210591    8430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:44.210595    8430 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:44.210597    8430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:44.210755    8430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:59:44.211924    8430 out.go:298] Setting JSON to false
	I0429 04:59:44.230312    8430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5355,"bootTime":1714386629,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:59:44.230388    8430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:59:44.234644    8430 out.go:177] * [stopped-upgrade-383000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:59:44.242551    8430 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:59:44.242627    8430 notify.go:220] Checking for updates...
	I0429 04:59:44.249486    8430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:59:44.257483    8430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:59:44.260526    8430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:59:44.264489    8430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:59:44.267527    8430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:59:44.270745    8430 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 04:59:44.273475    8430 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0429 04:59:44.276473    8430 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:59:44.280531    8430 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:59:44.287456    8430 start.go:297] selected driver: qemu2
	I0429 04:59:44.287464    8430 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51384 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 04:59:44.287516    8430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:59:44.290219    8430 cni.go:84] Creating CNI manager for ""
	I0429 04:59:44.290239    8430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:59:44.290270    8430 start.go:340] cluster config:
	{Name:stopped-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51384 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 04:59:44.290332    8430 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:59:44.298350    8430 out.go:177] * Starting "stopped-upgrade-383000" primary control-plane node in "stopped-upgrade-383000" cluster
	I0429 04:59:44.302503    8430 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0429 04:59:44.302524    8430 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0429 04:59:44.302535    8430 cache.go:56] Caching tarball of preloaded images
	I0429 04:59:44.302603    8430 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:59:44.302609    8430 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0429 04:59:44.302660    8430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/config.json ...
	I0429 04:59:44.303166    8430 start.go:360] acquireMachinesLock for stopped-upgrade-383000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:59:44.303202    8430 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "stopped-upgrade-383000"
	I0429 04:59:44.303212    8430 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:59:44.303217    8430 fix.go:54] fixHost starting: 
	I0429 04:59:44.303328    8430 fix.go:112] recreateIfNeeded on stopped-upgrade-383000: state=Stopped err=<nil>
	W0429 04:59:44.303343    8430 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:59:44.311490    8430 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-383000" ...
	I0429 04:59:45.833875    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 04:59:45.834293    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 04:59:45.874907    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 04:59:45.875048    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 04:59:45.899331    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 04:59:45.899428    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 04:59:45.914201    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 04:59:45.914280    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 04:59:45.928091    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 04:59:45.928167    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 04:59:45.938897    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 04:59:45.938964    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 04:59:45.949588    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 04:59:45.949658    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 04:59:45.960200    8269 logs.go:276] 0 containers: []
	W0429 04:59:45.960211    8269 logs.go:278] No container was found matching "kindnet"
	I0429 04:59:45.960262    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 04:59:45.970888    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 04:59:45.970906    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 04:59:45.970912    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 04:59:45.984053    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 04:59:45.984065    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 04:59:46.017899    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 04:59:46.017911    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 04:59:46.033507    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 04:59:46.033518    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 04:59:46.048042    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 04:59:46.048052    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 04:59:46.059849    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 04:59:46.059860    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 04:59:46.077133    8269 logs.go:123] Gathering logs for container status ...
	I0429 04:59:46.077144    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 04:59:46.089133    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 04:59:46.089147    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 04:59:46.127039    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:46.127143    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:46.127884    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 04:59:46.127888    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 04:59:46.133381    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 04:59:46.133390    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 04:59:46.153045    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 04:59:46.153055    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 04:59:46.168332    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 04:59:46.168345    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 04:59:46.184386    8269 logs.go:123] Gathering logs for Docker ...
	I0429 04:59:46.184399    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 04:59:46.207162    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 04:59:46.207169    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 04:59:46.221034    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 04:59:46.221045    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 04:59:46.233488    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 04:59:46.233500    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 04:59:46.244749    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 04:59:46.244761    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 04:59:46.256032    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:46.256044    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 04:59:46.256077    8269 out.go:239] X Problems detected in kubelet:
	W0429 04:59:46.256082    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 04:59:46.256085    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 04:59:46.256089    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:46.256091    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:44.315540    8430 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51351-:22,hostfwd=tcp::51352-:2376,hostname=stopped-upgrade-383000 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/disk.qcow2
	I0429 04:59:44.363395    8430 main.go:141] libmachine: STDOUT: 
	I0429 04:59:44.363447    8430 main.go:141] libmachine: STDERR: 
	I0429 04:59:44.363452    8430 main.go:141] libmachine: Waiting for VM to start (ssh -p 51351 docker@127.0.0.1)...
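The qemu-system-aarch64 invocation above uses user-mode networking with hostfwd rules, so guest ports 22 and 2376 become reachable on the host as 51351 and 51352; "Waiting for VM to start" is essentially a retry loop against the forwarded SSH port. A minimal sketch of that kind of wait, assuming a plain TCP dial stands in for the real SSH handshake minikube performs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForForwardedPort retries a TCP dial against a qemu user-mode
    // hostfwd port (here 51351 -> guest :22) until it accepts connections.
    // Illustrative only; the real wait also completes an SSH handshake.
    func waitForForwardedPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // guest sshd is answering
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForForwardedPort("127.0.0.1:51351", 3*time.Minute); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("VM SSH port is up")
        }
    }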
	I0429 04:59:56.258520    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:01.260767    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:01.260986    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:00:01.275935    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 05:00:01.276020    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:00:01.287256    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 05:00:01.287327    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:00:01.297259    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 05:00:01.297329    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:00:01.307446    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 05:00:01.307516    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:00:01.322686    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 05:00:01.322756    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:00:01.333263    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 05:00:01.333336    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:00:01.344490    8269 logs.go:276] 0 containers: []
	W0429 05:00:01.344507    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:00:01.344566    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:00:01.355541    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 05:00:01.355560    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:00:01.355567    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:00:01.360183    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 05:00:01.360190    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 05:00:01.374975    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 05:00:01.374986    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 05:00:01.386125    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:00:01.386136    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:00:01.421351    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:01.421442    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:01.422148    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 05:00:01.422151    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 05:00:01.441603    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 05:00:01.441616    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 05:00:01.452988    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:00:01.453000    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:00:01.477331    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 05:00:01.477337    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 05:00:01.491158    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 05:00:01.491170    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 05:00:01.506458    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 05:00:01.506469    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 05:00:01.522288    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 05:00:01.522298    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 05:00:01.539077    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 05:00:01.539085    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 05:00:01.550790    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:00:01.550801    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:00:01.562363    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:00:01.562373    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:00:01.596196    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 05:00:01.596205    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 05:00:01.610564    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 05:00:01.610572    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 05:00:01.621612    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 05:00:01.621623    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 05:00:01.632820    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:01.632830    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:00:01.632857    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:00:01.632861    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:01.632864    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:01.632871    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:01.632930    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:00:04.218412    8430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/config.json ...
	I0429 05:00:04.219456    8430 machine.go:94] provisionDockerMachine start ...
	I0429 05:00:04.219581    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.219928    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.219941    8430 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 05:00:04.296106    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 05:00:04.296122    8430 buildroot.go:166] provisioning hostname "stopped-upgrade-383000"
	I0429 05:00:04.296190    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.296332    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.296339    8430 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-383000 && echo "stopped-upgrade-383000" | sudo tee /etc/hostname
	I0429 05:00:04.365595    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-383000
	
	I0429 05:00:04.365664    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.365804    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.365813    8430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-383000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-383000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-383000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 05:00:04.431806    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
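The SSH command above pins the hostname into /etc/hosts idempotently: it does nothing if any entry already mentions the hostname, rewrites an existing 127.0.1.1 line if one exists, and appends one otherwise. A rough Go equivalent of that shell logic, assuming direct file access instead of sudo-over-SSH:

    package main

    import (
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell snippet above: no-op when the
    // hostname is already present, rewrite an existing "127.0.1.1 ..." line
    // when there is one, append otherwise. Sketch only; the real
    // provisioning runs this via sudo over SSH inside the guest.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := string(data)
        if strings.Contains(out, hostname) {
            return nil // already configured
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(out) {
            out = re.ReplaceAllString(out, "127.0.1.1 "+hostname)
        } else {
            if !strings.HasSuffix(out, "\n") {
                out += "\n"
            }
            out += "127.0.1.1 " + hostname + "\n"
        }
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        _ = ensureHostsEntry("/etc/hosts", "stopped-upgrade-383000")
    }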
	I0429 05:00:04.431822    8430 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18771-6092/.minikube CaCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18771-6092/.minikube}
	I0429 05:00:04.431830    8430 buildroot.go:174] setting up certificates
	I0429 05:00:04.431835    8430 provision.go:84] configureAuth start
	I0429 05:00:04.431840    8430 provision.go:143] copyHostCerts
	I0429 05:00:04.431925    8430 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem, removing ...
	I0429 05:00:04.431933    8430 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem
	I0429 05:00:04.432047    8430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem (1082 bytes)
	I0429 05:00:04.432246    8430 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem, removing ...
	I0429 05:00:04.432251    8430 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem
	I0429 05:00:04.432312    8430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem (1123 bytes)
	I0429 05:00:04.432442    8430 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem, removing ...
	I0429 05:00:04.432446    8430 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem
	I0429 05:00:04.432501    8430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem (1679 bytes)
	I0429 05:00:04.432615    8430 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-383000 san=[127.0.0.1 localhost minikube stopped-upgrade-383000]
	I0429 05:00:04.542191    8430 provision.go:177] copyRemoteCerts
	I0429 05:00:04.542232    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 05:00:04.542240    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:00:04.574403    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 05:00:04.580951    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 05:00:04.587636    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 05:00:04.594984    8430 provision.go:87] duration metric: took 163.139417ms to configureAuth
	I0429 05:00:04.594993    8430 buildroot.go:189] setting minikube options for container-runtime
	I0429 05:00:04.595097    8430 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:00:04.595136    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.595223    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.595228    8430 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 05:00:04.657067    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 05:00:04.657076    8430 buildroot.go:70] root file system type: tmpfs
	I0429 05:00:04.657131    8430 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 05:00:04.657177    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.657288    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.657328    8430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 05:00:04.720326    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 05:00:04.720372    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.720477    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.720485    8430 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 05:00:05.057917    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
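The `diff ... || { mv ...; systemctl ... }` one-liner above only installs docker.service.new and restarts Docker when the rendered unit differs from what is on disk; the "can't stat" output here just means no unit file existed yet, so this was a first-time install (hence the "Created symlink" message from systemctl enable). A small sketch of the same compare-then-replace pattern, with the local file paths and unit name as assumptions:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged replaces dst with src only when contents differ
    // (a missing dst counts as different), then reloads systemd and
    // enables and restarts the unit. Sketch of the shell one-liner above.
    func installIfChanged(src, dst, unit string) error {
        newData, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        oldData, err := os.ReadFile(dst)
        if err == nil && bytes.Equal(oldData, newData) {
            return nil // unit unchanged, leave the running service alone
        }
        if err := os.Rename(src, dst); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", unit}, {"restart", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(installIfChanged(
            "/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service", "docker"))
    }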
	I0429 05:00:05.057932    8430 machine.go:97] duration metric: took 838.465709ms to provisionDockerMachine
	I0429 05:00:05.057939    8430 start.go:293] postStartSetup for "stopped-upgrade-383000" (driver="qemu2")
	I0429 05:00:05.057945    8430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 05:00:05.058014    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 05:00:05.058023    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:00:05.090389    8430 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 05:00:05.091622    8430 info.go:137] Remote host: Buildroot 2021.02.12
	I0429 05:00:05.091630    8430 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18771-6092/.minikube/addons for local assets ...
	I0429 05:00:05.091707    8430 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18771-6092/.minikube/files for local assets ...
	I0429 05:00:05.091828    8430 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem -> 65002.pem in /etc/ssl/certs
	I0429 05:00:05.091961    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 05:00:05.094758    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem --> /etc/ssl/certs/65002.pem (1708 bytes)
	I0429 05:00:05.101609    8430 start.go:296] duration metric: took 43.665125ms for postStartSetup
	I0429 05:00:05.101622    8430 fix.go:56] duration metric: took 20.798449s for fixHost
	I0429 05:00:05.101658    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:05.102083    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:05.102104    8430 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 05:00:05.163755    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714392004.751566671
	
	I0429 05:00:05.163765    8430 fix.go:216] guest clock: 1714392004.751566671
	I0429 05:00:05.163769    8430 fix.go:229] Guest: 2024-04-29 05:00:04.751566671 -0700 PDT Remote: 2024-04-29 05:00:05.101624 -0700 PDT m=+20.926633001 (delta=-350.057329ms)
	I0429 05:00:05.163781    8430 fix.go:200] guest clock delta is within tolerance: -350.057329ms
	I0429 05:00:05.163785    8430 start.go:83] releasing machines lock for "stopped-upgrade-383000", held for 20.860621666s
	I0429 05:00:05.163850    8430 ssh_runner.go:195] Run: cat /version.json
	I0429 05:00:05.163860    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:00:05.163930    8430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 05:00:05.163986    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	W0429 05:00:05.164485    8430 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51470->127.0.0.1:51351: write: broken pipe
	I0429 05:00:05.164508    8430 retry.go:31] will retry after 200.187864ms: ssh: handshake failed: write tcp 127.0.0.1:51470->127.0.0.1:51351: write: broken pipe
	W0429 05:00:05.402188    8430 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0429 05:00:05.402278    8430 ssh_runner.go:195] Run: systemctl --version
	I0429 05:00:05.405112    8430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 05:00:05.407011    8430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 05:00:05.407042    8430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0429 05:00:05.410081    8430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0429 05:00:05.414960    8430 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 05:00:05.414969    8430 start.go:494] detecting cgroup driver to use...
	I0429 05:00:05.415046    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 05:00:05.424369    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0429 05:00:05.429531    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 05:00:05.433716    8430 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 05:00:05.433771    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 05:00:05.440294    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 05:00:05.443643    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 05:00:05.447055    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 05:00:05.450411    8430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 05:00:05.453318    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 05:00:05.456102    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 05:00:05.459561    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 05:00:05.463081    8430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 05:00:05.465852    8430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 05:00:05.468425    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:05.529499    8430 ssh_runner.go:195] Run: sudo systemctl restart containerd
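
The sed commands above switch containerd to the cgroupfs driver by rewriting SystemdCgroup in /etc/containerd/config.toml before the restart. A pure-Go sketch of the same rewrite, for illustration only:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Excerpt of a containerd config as it might look before the edit.
        config := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }
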
	I0429 05:00:05.540024    8430 start.go:494] detecting cgroup driver to use...
	I0429 05:00:05.540117    8430 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 05:00:05.545427    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 05:00:05.550475    8430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 05:00:05.558453    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 05:00:05.562854    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 05:00:05.567396    8430 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 05:00:05.613893    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 05:00:05.618471    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 05:00:05.624092    8430 ssh_runner.go:195] Run: which cri-dockerd
	I0429 05:00:05.625561    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 05:00:05.628173    8430 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 05:00:05.633340    8430 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 05:00:05.693607    8430 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 05:00:05.769470    8430 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 05:00:05.769532    8430 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 05:00:05.774527    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:05.833369    8430 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 05:00:06.981190    8430 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.14780725s)
	I0429 05:00:06.981257    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 05:00:06.985808    8430 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 05:00:06.991544    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 05:00:06.995859    8430 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 05:00:07.073617    8430 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 05:00:07.133482    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:07.195265    8430 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 05:00:07.201537    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 05:00:07.206020    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:07.264643    8430 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 05:00:07.303664    8430 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 05:00:07.303748    8430 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 05:00:07.306171    8430 start.go:562] Will wait 60s for crictl version
	I0429 05:00:07.306242    8430 ssh_runner.go:195] Run: which crictl
	I0429 05:00:07.307972    8430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 05:00:07.323718    8430 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
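
The four lines above are raw crictl version output. A hypothetical parser for that key/value shape (the field names come from the log; the parsing scheme is an assumption):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  20.10.16\nRuntimeApiVersion:  1.41.0"
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            // Each line is "Key:  value"; split on the first colon.
            if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
                fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        fmt.Println("runtime:", fields["RuntimeName"], fields["RuntimeVersion"])
    }
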
	I0429 05:00:07.323788    8430 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 05:00:07.340696    8430 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 05:00:07.361441    8430 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0429 05:00:07.361561    8430 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0429 05:00:07.362735    8430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 05:00:07.366457    8430 kubeadm.go:877] updating cluster {Name:stopped-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51384 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0429 05:00:07.366503    8430 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0429 05:00:07.366546    8430 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 05:00:07.376997    8430 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 05:00:07.377012    8430 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0429 05:00:07.377062    8430 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 05:00:07.380032    8430 ssh_runner.go:195] Run: which lz4
	I0429 05:00:07.381368    8430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 05:00:07.382609    8430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 05:00:07.382618    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
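
The lines above show the check-then-copy pattern: a remote stat that exits non-zero means the preload tarball is absent, which triggers the scp. A hypothetical condensation (the plain ssh call stands in for minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // remoteFileExists runs stat on the guest; exit status 1 corresponds to
    // the "No such file or directory" seen in the log.
    func remoteFileExists(host, path string) bool {
        return exec.Command("ssh", host, "stat", path).Run() == nil
    }

    func main() {
        if !remoteFileExists("localhost", "/preloaded.tar.lz4") {
            fmt.Println("copying preload tarball over scp ...")
        }
    }
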
	I0429 05:00:08.083093    8430 docker.go:649] duration metric: took 701.758667ms to copy over tarball
	I0429 05:00:08.083173    8430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 05:00:11.637022    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:09.274978    8430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.191792291s)
	I0429 05:00:09.274993    8430 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 05:00:09.290720    8430 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 05:00:09.294017    8430 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0429 05:00:09.298944    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:09.364728    8430 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 05:00:11.022646    8430 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.657901167s)
	I0429 05:00:11.022738    8430 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 05:00:11.035239    8430 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 05:00:11.035251    8430 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0429 05:00:11.035256    8430 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 05:00:11.041773    8430 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0429 05:00:11.041806    8430 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 05:00:11.041888    8430 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.041990    8430 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:11.042074    8430 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:11.042196    8430 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:11.042243    8430 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0429 05:00:11.043018    8430 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:11.051685    8430 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:11.051750    8430 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0429 05:00:11.051819    8430 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.052252    8430 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:11.052573    8430 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:11.052603    8430 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 05:00:11.052662    8430 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:11.052649    8430 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	W0429 05:00:11.841500    8430 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0429 05:00:11.841908    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.872797    8430 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0429 05:00:11.872845    8430 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.872947    8430 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.896880    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 05:00:11.897015    8430 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 05:00:11.899064    8430 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0429 05:00:11.899079    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0429 05:00:11.925667    8430 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 05:00:11.925680    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0429 05:00:12.163001    8430 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
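
Each cached image goes through the cycle visible above: inspect the runtime's image ID, and when the hash differs from the expected one, remove the image and reload it from the local cache. An illustrative condensation (error handling trimmed; the hash is the one from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runtimeImageID mirrors the `docker image inspect --format {{.Id}}`
    // calls in the log, normalizing away the sha256: prefix.
    func runtimeImageID(image string) (string, error) {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        return strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:"), err
    }

    func main() {
        const img = "gcr.io/k8s-minikube/storage-provisioner:v5"
        const want = "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
        if id, err := runtimeImageID(img); err != nil || id != want {
            // Next steps in the log: docker rmi, scp the cached tarball,
            // then `sudo cat <tarball> | docker load`.
            fmt.Printf("%q needs transfer\n", img)
            return
        }
        fmt.Printf("%q already present\n", img)
    }
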
	I0429 05:00:13.230913    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0429 05:00:13.256335    8430 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0429 05:00:13.256370    8430 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0429 05:00:13.256460    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0429 05:00:13.273792    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0429 05:00:13.338832    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:13.353868    8430 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0429 05:00:13.353895    8430 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:13.353960    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:13.366317    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0429 05:00:13.377563    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:13.378648    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0429 05:00:13.394876    8430 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0429 05:00:13.394900    8430 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0429 05:00:13.394905    8430 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:13.394916    8430 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0429 05:00:13.394964    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:13.394964    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0429 05:00:13.405429    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0429 05:00:13.405563    8430 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0429 05:00:13.406561    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0429 05:00:13.407514    8430 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0429 05:00:13.407527    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0429 05:00:13.415149    8430 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0429 05:00:13.415158    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0429 05:00:13.441939    8430 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0429 05:00:13.947068    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0429 05:00:13.958767    8430 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0429 05:00:13.959209    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:14.000932    8430 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0429 05:00:14.000967    8430 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 05:00:14.001004    8430 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0429 05:00:14.001032    8430 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:14.001060    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0429 05:00:14.001075    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:14.008724    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:14.030773    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0429 05:00:14.030779    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0429 05:00:14.030916    8430 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0429 05:00:14.035132    8430 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0429 05:00:14.035152    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0429 05:00:14.035261    8430 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0429 05:00:14.035278    8430 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:14.035324    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:14.062692    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0429 05:00:14.073529    8430 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0429 05:00:14.073543    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0429 05:00:14.109032    8430 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0429 05:00:14.109072    8430 cache_images.go:92] duration metric: took 3.0738165s to LoadCachedImages
	W0429 05:00:14.109116    8430 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	I0429 05:00:14.109122    8430 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0429 05:00:14.109179    8430 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-383000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
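
The kubelet unit above is rendered from the cluster config. A minimal text/template sketch of how such a drop-in could be produced (the real minikube template differs; the values are taken from the log):

    package main

    import (
        "os"
        "text/template"
    )

    const unit = "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint={{.CRIEndpoint}} --hostname-override={{.Node}} --node-ip={{.IP}}\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "Version":     "v1.24.1",
            "CRIEndpoint": "unix:///var/run/cri-dockerd.sock",
            "Node":        "stopped-upgrade-383000",
            "IP":          "10.0.2.15",
        })
    }
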
	I0429 05:00:14.109242    8430 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 05:00:14.122370    8430 cni.go:84] Creating CNI manager for ""
	I0429 05:00:14.122384    8430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:00:14.122389    8430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 05:00:14.122397    8430 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-383000 NodeName:stopped-upgrade-383000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 05:00:14.122470    8430 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-383000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 05:00:14.122528    8430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0429 05:00:14.125612    8430 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 05:00:14.125640    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 05:00:14.128296    8430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0429 05:00:14.133306    8430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 05:00:14.138038    8430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
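
The kubeadm config written above fixes the pod subnet to 10.244.0.0/16 and the service subnet to 10.96.0.0/12; both parse as valid CIDRs. A quick illustrative sanity check with the standard library (not part of minikube's own validation):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Subnets from the generated kubeadm config above.
        for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
            _, n, err := net.ParseCIDR(cidr)
            if err != nil {
                panic(err)
            }
            ones, bits := n.Mask.Size()
            fmt.Printf("%s -> %d host bits\n", n, bits-ones)
        }
    }
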
	I0429 05:00:14.143557    8430 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0429 05:00:14.144779    8430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
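
The bash one-liner above updates /etc/hosts by dropping any stale control-plane.minikube.internal entry and appending a fresh one, then copying the file back with sudo. A pure-Go equivalent of that upsert, for illustration:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any line ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline above.
    func upsertHost(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "10.0.2.15", "control-plane.minikube.internal"))
    }
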
	I0429 05:00:14.148078    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:16.639258    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:16.639467    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:00:16.651170    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 05:00:16.651248    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:00:16.667290    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 05:00:16.667364    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:00:16.678367    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 05:00:16.678441    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:00:16.688956    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 05:00:16.689027    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:00:16.699692    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 05:00:16.699763    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:00:16.715073    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 05:00:16.715146    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:00:16.725770    8269 logs.go:276] 0 containers: []
	W0429 05:00:16.725782    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:00:16.725843    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:00:16.739862    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 05:00:16.739882    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 05:00:16.739888    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 05:00:16.754651    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 05:00:16.754660    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 05:00:16.766338    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:00:16.766350    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:00:16.783390    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 05:00:16.783400    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 05:00:16.795034    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 05:00:16.795046    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 05:00:16.807289    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 05:00:16.807300    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 05:00:16.819348    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 05:00:16.819358    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 05:00:16.843573    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:00:16.843585    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:00:16.881215    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:16.881310    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:16.882070    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:00:16.882078    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:00:16.917662    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 05:00:16.917672    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 05:00:16.931507    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 05:00:16.931518    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 05:00:16.945537    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 05:00:16.945548    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 05:00:16.957688    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 05:00:16.957699    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 05:00:16.973576    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:00:16.973586    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:00:16.978104    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 05:00:16.978115    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 05:00:16.997987    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 05:00:16.997997    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 05:00:17.009117    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:00:17.009128    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:00:17.032881    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:17.032889    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:00:17.032912    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:00:17.032916    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:17.032920    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:17.032925    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:17.032927    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:00:14.213907    8430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 05:00:14.225431    8430 certs.go:68] Setting up /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000 for IP: 10.0.2.15
	I0429 05:00:14.225441    8430 certs.go:194] generating shared ca certs ...
	I0429 05:00:14.225449    8430 certs.go:226] acquiring lock for ca certs: {Name:mk6c1fe0c368234e15356f74a5a8907d9d0bc3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:14.225622    8430 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.key
	I0429 05:00:14.225844    8430 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.key
	I0429 05:00:14.225851    8430 certs.go:256] generating profile certs ...
	I0429 05:00:14.226042    8430 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.key
	I0429 05:00:14.226078    8430 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key.fc72e758
	I0429 05:00:14.226091    8430 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt.fc72e758 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0429 05:00:14.349165    8430 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt.fc72e758 ...
	I0429 05:00:14.349181    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt.fc72e758: {Name:mk90f388eda2edfb8de5b5afa7533ff52d4f49e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:14.349502    8430 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key.fc72e758 ...
	I0429 05:00:14.349506    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key.fc72e758: {Name:mkf949702a83e58fb4b946f45ffcc95bbbfbdaa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:14.349645    8430 certs.go:381] copying /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt.fc72e758 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt
	I0429 05:00:14.349782    8430 certs.go:385] copying /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key.fc72e758 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key
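
The apiserver certificate assembled above carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 10.0.2.15 listed in the log. A self-contained Go sketch that generates a certificate with those SANs (self-signed here for brevity; minikube signs with its CA):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            // IP SANs taken from the crypto.go line in the log.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
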
	I0429 05:00:14.350180    8430 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/proxy-client.key
	I0429 05:00:14.350351    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500.pem (1338 bytes)
	W0429 05:00:14.350555    8430 certs.go:480] ignoring /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500_empty.pem, impossibly tiny 0 bytes
	I0429 05:00:14.350560    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 05:00:14.350590    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem (1082 bytes)
	I0429 05:00:14.350616    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem (1123 bytes)
	I0429 05:00:14.350635    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem (1679 bytes)
	I0429 05:00:14.350676    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem (1708 bytes)
	I0429 05:00:14.351012    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 05:00:14.357861    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 05:00:14.364654    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 05:00:14.371730    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 05:00:14.378547    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 05:00:14.385065    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 05:00:14.391362    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 05:00:14.398059    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 05:00:14.404314    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 05:00:14.410969    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500.pem --> /usr/share/ca-certificates/6500.pem (1338 bytes)
	I0429 05:00:14.418064    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem --> /usr/share/ca-certificates/65002.pem (1708 bytes)
	I0429 05:00:14.424250    8430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 05:00:14.429665    8430 ssh_runner.go:195] Run: openssl version
	I0429 05:00:14.431425    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 05:00:14.434452    8430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 05:00:14.435753    8430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I0429 05:00:14.435773    8430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 05:00:14.437508    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 05:00:14.440323    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6500.pem && ln -fs /usr/share/ca-certificates/6500.pem /etc/ssl/certs/6500.pem"
	I0429 05:00:14.443405    8430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6500.pem
	I0429 05:00:14.444876    8430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 11:44 /usr/share/ca-certificates/6500.pem
	I0429 05:00:14.444902    8430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6500.pem
	I0429 05:00:14.446721    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6500.pem /etc/ssl/certs/51391683.0"
	I0429 05:00:14.449552    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65002.pem && ln -fs /usr/share/ca-certificates/65002.pem /etc/ssl/certs/65002.pem"
	I0429 05:00:14.452314    8430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65002.pem
	I0429 05:00:14.453665    8430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 11:44 /usr/share/ca-certificates/65002.pem
	I0429 05:00:14.453685    8430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65002.pem
	I0429 05:00:14.455411    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65002.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 05:00:14.458662    8430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 05:00:14.460138    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 05:00:14.462606    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 05:00:14.464347    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 05:00:14.466153    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 05:00:14.467888    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 05:00:14.469623    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
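
The openssl -checkend 86400 invocations above fail when a certificate expires within the next 24 hours. An equivalent check in Go (path taken from the log; illustrative only):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemBytes
    // expires within d, matching openssl's -checkend semantics.
    func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, fmt.Errorf("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            panic(err)
        }
        soon, err := expiresWithin(data, 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
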
	I0429 05:00:14.471351    8430 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51384 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 05:00:14.471417    8430 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 05:00:14.481596    8430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 05:00:14.484699    8430 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 05:00:14.484706    8430 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 05:00:14.484709    8430 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 05:00:14.484729    8430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 05:00:14.487604    8430 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 05:00:14.487901    8430 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-383000" does not appear in /Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:00:14.487995    8430 kubeconfig.go:62] /Users/jenkins/minikube-integration/18771-6092/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-383000" cluster setting kubeconfig missing "stopped-upgrade-383000" context setting]
	I0429 05:00:14.488208    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/kubeconfig: {Name:mkc4105502c44b2331a2dd91226134a74ad93594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:14.488633    8430 kapi.go:59] client config for stopped-upgrade-383000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.key", CAFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10184fcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 05:00:14.489089    8430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 05:00:14.491715    8430 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-383000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
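
The drift detection above hinges on diff's exit status: 0 means the rendered kubeadm.yaml is unchanged, non-zero means the cluster is reconfigured from the new file. A minimal sketch (paths from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Output()
        if err != nil {
            // Non-zero exit: the configs differ; reconfigure from the .new file.
            fmt.Printf("detected kubeadm config drift:\n%s", out)
            return
        }
        fmt.Println("kubeadm config unchanged")
    }
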
	I0429 05:00:14.491720    8430 kubeadm.go:1154] stopping kube-system containers ...
	I0429 05:00:14.491760    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 05:00:14.510801    8430 docker.go:483] Stopping containers: [354faa34ac46 80cae0f8410a 63c844e608a1 524a65bbf479 dbef1337b10c e5b938769f45 f4864e330600 dad4d6abc111]
	I0429 05:00:14.510862    8430 ssh_runner.go:195] Run: docker stop 354faa34ac46 80cae0f8410a 63c844e608a1 524a65bbf479 dbef1337b10c e5b938769f45 f4864e330600 dad4d6abc111
	I0429 05:00:14.526008    8430 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 05:00:14.531369    8430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 05:00:14.534608    8430 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 05:00:14.534620    8430 kubeadm.go:156] found existing configuration files:
	
	I0429 05:00:14.534644    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/admin.conf
	I0429 05:00:14.537143    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 05:00:14.537164    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 05:00:14.539861    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/kubelet.conf
	I0429 05:00:14.542834    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 05:00:14.542853    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 05:00:14.545408    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/controller-manager.conf
	I0429 05:00:14.547753    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 05:00:14.547774    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 05:00:14.550633    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/scheduler.conf
	I0429 05:00:14.553021    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 05:00:14.553043    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 05:00:14.555706    8430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
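
	[editor's note] The grep/rm sequence above is the stale-config cleanup: each expected kubeconfig under /etc/kubernetes is searched for the current control-plane endpoint, and any file that does not contain it (or does not exist at all) is removed before the cluster is reconfigured. A hedged sketch of that loop, reusing the same hypothetical runCmd helper:

    package bootstrap

    // cleanStaleKubeconfigs sketches the cleanup seen above. grep exits
    // non-zero both when the endpoint string is absent and when the file
    // itself is missing, so the follow-up `rm -f` is safe in either case.
    func cleanStaleKubeconfigs(runCmd func(string) (string, error), endpoint string) {
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            if _, err := runCmd("sudo grep " + endpoint + " " + path); err != nil {
                runCmd("sudo rm -f " + path)
            }
        }
    }
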
	I0429 05:00:14.558769    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:14.582079    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:14.938672    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:15.053144    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:15.073817    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:15.094035    8430 api_server.go:52] waiting for apiserver process to appear ...
	I0429 05:00:15.094106    8430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:00:15.595399    8430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:00:16.096213    8430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:00:16.100543    8430 api_server.go:72] duration metric: took 1.006511625s to wait for apiserver process to appear ...
	I0429 05:00:16.100551    8430 api_server.go:88] waiting for apiserver healthz status ...
	I0429 05:00:16.100560    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:21.102753    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
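
	[editor's note] The "Checking apiserver healthz ..." / "stopped: ... context deadline exceeded" pairs that repeat from here on come from a poll loop: each probe is an HTTPS GET against /healthz with a short per-request client timeout (about five seconds, judging by the timestamps), retried until the endpoint answers 200 or an overall deadline passes. A minimal sketch under those assumptions; InsecureSkipVerify is a stand-in for the real CA handling that the client config above wires up properly:

    package bootstrap

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz sketches the loop behind api_server.go:253/269. The
    // ~5s gap between "Checking" and "stopped" lines suggests the short
    // per-request timeout, which also paces the retries in the log.
    func waitForHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver is healthy
                }
            }
            time.Sleep(250 * time.Millisecond) // brief pause when a probe fails fast
        }
        return fmt.Errorf("apiserver at %s never reported healthy", url)
    }
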
	I0429 05:00:21.102882    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:27.037071    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:26.103823    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:26.103927    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:32.039863    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:32.039963    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:00:32.051427    8269 logs.go:276] 2 containers: [20e5fdcd56e8 fa1182944783]
	I0429 05:00:32.051494    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:00:32.063565    8269 logs.go:276] 2 containers: [2ab3e4df63bd d445bd598284]
	I0429 05:00:32.063630    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:00:32.073631    8269 logs.go:276] 1 containers: [bad32a53115b]
	I0429 05:00:32.073691    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:00:32.084353    8269 logs.go:276] 2 containers: [9859f21707de cbd2ba51cafa]
	I0429 05:00:32.084408    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:00:32.095003    8269 logs.go:276] 1 containers: [1cab5e65caa3]
	I0429 05:00:32.095071    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:00:32.105874    8269 logs.go:276] 2 containers: [951503dd4353 337d2bfa0452]
	I0429 05:00:32.105944    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:00:32.116523    8269 logs.go:276] 0 containers: []
	W0429 05:00:32.116537    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:00:32.116590    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:00:32.127560    8269 logs.go:276] 2 containers: [610ab484c7e0 164f3ba6e510]
	I0429 05:00:32.127576    8269 logs.go:123] Gathering logs for etcd [2ab3e4df63bd] ...
	I0429 05:00:32.127581    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ab3e4df63bd"
	I0429 05:00:32.142656    8269 logs.go:123] Gathering logs for coredns [bad32a53115b] ...
	I0429 05:00:32.142671    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bad32a53115b"
	I0429 05:00:32.154465    8269 logs.go:123] Gathering logs for storage-provisioner [164f3ba6e510] ...
	I0429 05:00:32.154476    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 164f3ba6e510"
	I0429 05:00:32.165845    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:00:32.165856    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:00:32.189576    8269 logs.go:123] Gathering logs for kube-scheduler [9859f21707de] ...
	I0429 05:00:32.189593    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9859f21707de"
	I0429 05:00:32.202058    8269 logs.go:123] Gathering logs for kube-scheduler [cbd2ba51cafa] ...
	I0429 05:00:32.202072    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbd2ba51cafa"
	I0429 05:00:32.217165    8269 logs.go:123] Gathering logs for kube-proxy [1cab5e65caa3] ...
	I0429 05:00:32.217176    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cab5e65caa3"
	I0429 05:00:32.228964    8269 logs.go:123] Gathering logs for storage-provisioner [610ab484c7e0] ...
	I0429 05:00:32.228974    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 610ab484c7e0"
	I0429 05:00:32.240417    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:00:32.240431    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:00:32.278864    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:32.278959    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:32.279706    8269 logs.go:123] Gathering logs for kube-apiserver [20e5fdcd56e8] ...
	I0429 05:00:32.279712    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e5fdcd56e8"
	I0429 05:00:32.295573    8269 logs.go:123] Gathering logs for etcd [d445bd598284] ...
	I0429 05:00:32.295586    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d445bd598284"
	I0429 05:00:32.313934    8269 logs.go:123] Gathering logs for kube-controller-manager [951503dd4353] ...
	I0429 05:00:32.313944    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 951503dd4353"
	I0429 05:00:32.331402    8269 logs.go:123] Gathering logs for kube-controller-manager [337d2bfa0452] ...
	I0429 05:00:32.331416    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 337d2bfa0452"
	I0429 05:00:32.343725    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:00:32.343738    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:00:32.349931    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:00:32.349941    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:00:32.384209    8269 logs.go:123] Gathering logs for kube-apiserver [fa1182944783] ...
	I0429 05:00:32.384221    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1182944783"
	I0429 05:00:32.406472    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:00:32.406483    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:00:32.418061    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:32.418071    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:00:32.418096    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:00:32.418100    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:00:32.418103    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:00:32.418114    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:00:32.418118    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:00:31.104995    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:31.105194    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:36.106595    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:36.106672    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:42.422312    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:41.108378    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:41.108442    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:47.424671    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:47.424711    8269 kubeadm.go:591] duration metric: took 4m8.729023542s to restartPrimaryControlPlane
	W0429 05:00:47.424753    8269 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 05:00:47.424769    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0429 05:00:48.395684    8269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 05:00:48.400508    8269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 05:00:48.403302    8269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 05:00:48.406071    8269 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 05:00:48.406078    8269 kubeadm.go:156] found existing configuration files:
	
	I0429 05:00:48.406099    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/admin.conf
	I0429 05:00:48.408660    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 05:00:48.408688    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 05:00:48.411765    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/kubelet.conf
	I0429 05:00:48.414330    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 05:00:48.414355    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 05:00:48.417709    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/controller-manager.conf
	I0429 05:00:48.420735    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 05:00:48.420757    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 05:00:48.423767    8269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/scheduler.conf
	I0429 05:00:48.426428    8269 kubeadm.go:162] "https://control-plane.minikube.internal:51195" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51195 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 05:00:48.426454    8269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 05:00:48.429501    8269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
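
	[editor's note] Having given up on restarting the existing control plane, minikube wipes it with `kubeadm reset` and re-runs a full `kubeadm init`, ignoring exactly the preflight checks that would otherwise refuse a reused VM (still-populated manifest and etcd directories, an occupied kubelet port, swap, and the small QEMU CPU/memory allotment). A sketch of how that flag list could be assembled, matching the command logged above:

    package bootstrap

    import "strings"

    // initIgnorePreflight rebuilds the logged --ignore-preflight-errors
    // list: specific checks are named rather than disabling preflight
    // wholesale, so unrelated failures would still abort the init.
    func initIgnorePreflight() string {
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "DirAvailable--var-lib-minikube-etcd",
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }
        return "kubeadm init --config /var/tmp/minikube/kubeadm.yaml" +
            " --ignore-preflight-errors=" + strings.Join(ignored, ",")
    }
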
	I0429 05:00:48.446003    8269 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0429 05:00:48.446036    8269 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 05:00:48.498108    8269 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 05:00:48.498236    8269 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 05:00:48.498326    8269 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 05:00:48.547913    8269 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 05:00:48.553113    8269 out.go:204]   - Generating certificates and keys ...
	I0429 05:00:48.553144    8269 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 05:00:48.553192    8269 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 05:00:48.553232    8269 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 05:00:48.553289    8269 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 05:00:48.553322    8269 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 05:00:48.553378    8269 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 05:00:48.553410    8269 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 05:00:48.553445    8269 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 05:00:48.553515    8269 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 05:00:48.553576    8269 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 05:00:48.553594    8269 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 05:00:48.553620    8269 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 05:00:46.110049    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:46.110135    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:48.727227    8269 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 05:00:48.902568    8269 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 05:00:48.983664    8269 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 05:00:49.155668    8269 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 05:00:49.185085    8269 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 05:00:49.185476    8269 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 05:00:49.185546    8269 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 05:00:49.274525    8269 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 05:00:49.278327    8269 out.go:204]   - Booting up control plane ...
	I0429 05:00:49.278389    8269 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 05:00:49.279493    8269 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 05:00:49.279527    8269 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 05:00:49.279565    8269 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 05:00:49.279692    8269 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 05:00:51.112683    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:51.112726    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:53.782149    8269 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504308 seconds
	I0429 05:00:53.782209    8269 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 05:00:53.787843    8269 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 05:00:54.312789    8269 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 05:00:54.313081    8269 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-310000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 05:00:54.817550    8269 kubeadm.go:309] [bootstrap-token] Using token: 9k8pha.lxg4q7zdgu456eb0
	I0429 05:00:54.821531    8269 out.go:204]   - Configuring RBAC rules ...
	I0429 05:00:54.821612    8269 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 05:00:54.822412    8269 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 05:00:54.828623    8269 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 05:00:54.829698    8269 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 05:00:54.830666    8269 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 05:00:54.831641    8269 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 05:00:54.835509    8269 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 05:00:54.996704    8269 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 05:00:55.224168    8269 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 05:00:55.224578    8269 kubeadm.go:309] 
	I0429 05:00:55.224629    8269 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 05:00:55.224649    8269 kubeadm.go:309] 
	I0429 05:00:55.224715    8269 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 05:00:55.224732    8269 kubeadm.go:309] 
	I0429 05:00:55.224800    8269 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 05:00:55.224832    8269 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 05:00:55.224858    8269 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 05:00:55.224860    8269 kubeadm.go:309] 
	I0429 05:00:55.224888    8269 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 05:00:55.224890    8269 kubeadm.go:309] 
	I0429 05:00:55.224912    8269 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 05:00:55.224914    8269 kubeadm.go:309] 
	I0429 05:00:55.224955    8269 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 05:00:55.225035    8269 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 05:00:55.225083    8269 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 05:00:55.225090    8269 kubeadm.go:309] 
	I0429 05:00:55.225129    8269 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 05:00:55.225180    8269 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 05:00:55.225186    8269 kubeadm.go:309] 
	I0429 05:00:55.225249    8269 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9k8pha.lxg4q7zdgu456eb0 \
	I0429 05:00:55.225345    8269 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4832dc51ff6d0e6d2b485eb727ddc01b0033877744e5e13a6c0f8b67a1b7145 \
	I0429 05:00:55.225360    8269 kubeadm.go:309] 	--control-plane 
	I0429 05:00:55.225368    8269 kubeadm.go:309] 
	I0429 05:00:55.225412    8269 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 05:00:55.225417    8269 kubeadm.go:309] 
	I0429 05:00:55.225483    8269 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9k8pha.lxg4q7zdgu456eb0 \
	I0429 05:00:55.225572    8269 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4832dc51ff6d0e6d2b485eb727ddc01b0033877744e5e13a6c0f8b67a1b7145 
	I0429 05:00:55.225644    8269 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 05:00:55.225651    8269 cni.go:84] Creating CNI manager for ""
	I0429 05:00:55.225658    8269 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:00:55.229902    8269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 05:00:55.236884    8269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 05:00:55.239905    8269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 05:00:55.244630    8269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 05:00:55.244673    8269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 05:00:55.244703    8269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-310000 minikube.k8s.io/updated_at=2024_04_29T05_00_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844 minikube.k8s.io/name=running-upgrade-310000 minikube.k8s.io/primary=true
	I0429 05:00:55.290501    8269 ops.go:34] apiserver oom_adj: -16
	I0429 05:00:55.290557    8269 kubeadm.go:1107] duration metric: took 45.920292ms to wait for elevateKubeSystemPrivileges
	W0429 05:00:55.290575    8269 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 05:00:55.290579    8269 kubeadm.go:393] duration metric: took 4m16.60843675s to StartCluster
	I0429 05:00:55.290587    8269 settings.go:142] acquiring lock: {Name:mka93054a23bdbf29aca25affe181be869710883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:55.290745    8269 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:00:55.291154    8269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/kubeconfig: {Name:mkc4105502c44b2331a2dd91226134a74ad93594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:55.291402    8269 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:00:55.294972    8269 out.go:177] * Verifying Kubernetes components...
	I0429 05:00:55.291418    8269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 05:00:55.291585    8269 config.go:182] Loaded profile config "running-upgrade-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:00:55.302878    8269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:55.302878    8269 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-310000"
	I0429 05:00:55.302877    8269 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-310000"
	I0429 05:00:55.302907    8269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-310000"
	I0429 05:00:55.302917    8269 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-310000"
	W0429 05:00:55.302921    8269 addons.go:243] addon storage-provisioner should already be in state true
	I0429 05:00:55.302934    8269 host.go:66] Checking if "running-upgrade-310000" exists ...
	I0429 05:00:55.303973    8269 kapi.go:59] client config for running-upgrade-310000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/running-upgrade-310000/client.key", CAFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101953cb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 05:00:55.304352    8269 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-310000"
	W0429 05:00:55.304356    8269 addons.go:243] addon default-storageclass should already be in state true
	I0429 05:00:55.304363    8269 host.go:66] Checking if "running-upgrade-310000" exists ...
	I0429 05:00:55.309683    8269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:55.313971    8269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 05:00:55.313978    8269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 05:00:55.313993    8269 sshutil.go:53] new ssh client: &{IP:localhost Port:51162 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/running-upgrade-310000/id_rsa Username:docker}
	I0429 05:00:55.314710    8269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 05:00:55.314714    8269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 05:00:55.314717    8269 sshutil.go:53] new ssh client: &{IP:localhost Port:51162 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/running-upgrade-310000/id_rsa Username:docker}
	I0429 05:00:55.399111    8269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 05:00:55.403672    8269 api_server.go:52] waiting for apiserver process to appear ...
	I0429 05:00:55.403713    8269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:00:55.407397    8269 api_server.go:72] duration metric: took 115.986125ms to wait for apiserver process to appear ...
	I0429 05:00:55.407405    8269 api_server.go:88] waiting for apiserver healthz status ...
	I0429 05:00:55.407411    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:55.458166    8269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 05:00:55.461059    8269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
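
	[editor's note] Addon enablement above follows a two-step pattern: each manifest is copied into /etc/kubernetes/addons inside the VM (the "scp memory" lines), then applied with the node-local kubectl against the in-VM kubeconfig, so the apply works even when the host cannot reach the apiserver directly. A thin sketch of the second step, again with the hypothetical runCmd helper; the binary path matches the logged v1.24.1 layout:

    package bootstrap

    // applyAddon sketches the two kubectl invocations above: the manifest
    // is assumed to be on the node already, and the node-local kubectl
    // applies it using the in-VM kubeconfig.
    func applyAddon(runCmd func(string) (string, error), manifest string) error {
        _, err := runCmd("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.24.1/kubectl apply -f " + manifest)
        return err
    }
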
	I0429 05:00:56.113840    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:56.113862    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:00.409271    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:00.409316    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:01.116090    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:01.116146    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:05.409687    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:05.409752    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:06.118211    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:06.118253    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:10.410088    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:10.410107    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:11.119635    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:11.119685    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:15.410497    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:15.410560    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:16.121940    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:16.122171    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:16.140194    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:16.140306    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:16.153403    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:16.153482    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:16.165183    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:16.165249    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:16.175871    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:16.175942    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:16.190318    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:16.190388    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:16.200982    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:16.201054    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:16.211881    8430 logs.go:276] 0 containers: []
	W0429 05:01:16.211893    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:16.211951    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:16.222292    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:16.222309    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:16.222314    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:16.235971    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:16.235994    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:16.250078    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:16.250092    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:16.261833    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:16.261844    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:16.272855    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:16.272865    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:16.285024    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:16.285035    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:16.324171    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:16.324181    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:16.339533    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:16.339549    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:16.358511    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:16.358526    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:16.370223    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:16.370237    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:16.385991    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:16.386004    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:16.390330    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:16.390337    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:16.491678    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:16.491702    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:16.519465    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:16.519476    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:16.531213    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:16.531226    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:16.555322    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:16.555331    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:16.574443    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:16.574456    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:19.102495    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:20.411059    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:20.411091    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:24.103013    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:24.103429    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:24.138266    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:24.138410    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:24.158694    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:24.158792    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:24.172793    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:24.172869    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:24.185147    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:24.185216    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:24.195876    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:24.195948    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:25.411752    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:25.411806    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0429 05:01:25.811154    8269 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0429 05:01:25.819318    8269 out.go:177] * Enabled addons: storage-provisioner
	I0429 05:01:25.827272    8269 addons.go:505] duration metric: took 30.535929958s for enable addons: enabled=[storage-provisioner]
	I0429 05:01:24.210609    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:24.214698    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:24.224636    8430 logs.go:276] 0 containers: []
	W0429 05:01:24.224649    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:24.224705    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:24.234666    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:24.234689    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:24.234694    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:24.259493    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:24.259508    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:24.272122    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:24.272134    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:24.283400    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:24.283411    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:24.295390    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:24.295406    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:24.299405    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:24.299413    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:24.338523    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:24.338534    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:24.362580    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:24.362591    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:24.379513    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:24.379529    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:24.395033    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:24.395049    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:24.406346    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:24.406356    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:24.423264    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:24.423274    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:24.434658    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:24.434668    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:24.459593    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:24.459600    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:24.498150    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:24.498169    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:24.511896    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:24.511908    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:24.525757    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:24.525771    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:27.039868    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:30.412687    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:30.412730    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:32.042253    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:32.042454    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:32.064592    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:32.064707    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:32.080007    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:32.080095    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:32.092641    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:32.092708    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:32.104029    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:32.104097    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:32.114377    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:32.114441    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:32.124589    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:32.124651    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:32.134893    8430 logs.go:276] 0 containers: []
	W0429 05:01:32.134906    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:32.134961    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:32.145346    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:32.145363    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:32.145368    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:32.161706    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:32.161716    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:32.173905    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:32.173916    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:32.198081    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:32.198088    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:32.209997    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:32.210007    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:32.224717    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:32.224727    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:32.250411    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:32.250422    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:32.262210    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:32.262222    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:32.273922    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:32.273934    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:32.294711    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:32.294723    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:32.306017    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:32.306030    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:32.344968    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:32.344982    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:32.359875    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:32.359885    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:32.371019    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:32.371031    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:32.406347    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:32.406360    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:32.421155    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:32.421168    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:32.438574    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:32.438585    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:35.413852    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:35.413901    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:34.944948    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:40.415253    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:40.415279    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:39.947647    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
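The paired "Checking apiserver healthz" / "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" lines above are minikube's apiserver wait loop timing out probe after probe. A minimal sketch of one such probe, assuming a self-signed serving certificate (so verification is skipped) and an illustrative 5-second client timeout; the real loop's timeouts and retry policy live in api_server.go and are not visible in this log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz issues one GET against the apiserver /healthz endpoint.
// During bootstrap the apiserver serves a self-signed certificate, so
// this probe skips verification rather than configuring a CA pool.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On an unreachable apiserver this is the error seen above:
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for {
		if err := pollHealthz(url, 5*time.Second); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(time.Second) // back-off between probes is illustrative
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
}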
	I0429 05:01:39.947850    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:39.962074    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:39.962155    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:39.973614    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:39.973672    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:39.983949    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:39.984011    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:39.994105    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:39.994172    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:40.004701    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:40.004795    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:40.015279    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:40.015344    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:40.025930    8430 logs.go:276] 0 containers: []
	W0429 05:01:40.025943    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:40.025998    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:40.036312    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
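Before each gathering pass, the run re-enumerates the control-plane containers with docker ps name filters, matching the k8s_<component>_... names that kubeadm-managed pod containers get. A local sketch of that enumeration, assuming docker on PATH; in the report the same command runs inside the guest over SSH via ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches k8s_<component>, one short ID per output line.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		// Matches the report's "N containers: [...]" lines.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}

The -a flag includes exited containers, which is likely why several components report two IDs in this run: an exited pre-restart container and its replacement both match the name filter.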
	I0429 05:01:40.036332    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:40.036338    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:40.074738    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:40.074749    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:40.086548    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:40.086560    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:40.098190    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:40.098201    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:40.143354    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:40.143370    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:40.155748    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:40.155759    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:40.166943    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:40.166954    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:40.179081    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:40.179093    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:40.193407    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:40.193417    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:40.214387    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:40.214397    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:40.225726    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:40.225737    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:40.240497    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:40.240507    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:40.264844    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:40.264855    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:40.269548    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:40.269555    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:40.294546    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:40.294558    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:40.308581    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:40.308592    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:40.323431    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:40.323442    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
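Each "Gathering logs for <component> [<id>]" step then tails that container's output with docker logs. A sketch of the same command shape, wrapped in /bin/bash -c exactly as in the Run: lines above; in the report the string is handed to an SSH session, while here it runs locally, and the ID in main is illustrative (the etcd ID from this run):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs fetches the last n lines a container wrote,
// mirroring the `docker logs --tail 400 <id>` commands above.
func tailContainerLogs(id string, n int) (string, error) {
	cmd := fmt.Sprintf("docker logs --tail %d %s", n, id)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := tailContainerLogs("f4864e330600", 400)
	if err != nil {
		fmt.Println("docker logs failed:", err)
		return
	}
	fmt.Print(out)
}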
	I0429 05:01:42.843155    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:45.416985    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:45.417034    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:47.845553    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:47.845891    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:47.881537    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:47.881687    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:47.902608    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:47.902700    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:47.917429    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:47.917510    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:47.929388    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:47.929459    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:47.940175    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:47.940248    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:47.951245    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:47.951312    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:47.961192    8430 logs.go:276] 0 containers: []
	W0429 05:01:47.961205    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:47.961265    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:47.971560    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:47.971577    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:47.971582    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:47.986308    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:47.986321    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:47.997902    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:47.997916    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:48.036463    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:48.036475    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:48.050612    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:48.050622    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:48.074920    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:48.074929    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:48.086725    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:48.086735    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:48.097865    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:48.097881    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:48.123449    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:48.123457    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:48.135212    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:48.135223    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:48.170375    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:48.170391    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:48.182531    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:48.182543    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:48.197338    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:48.197349    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:48.202083    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:48.202093    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:48.213814    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:48.213825    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:48.228203    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:48.228213    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:48.247116    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:48.247131    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:50.419343    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:50.419389    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:50.773881    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:55.421650    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:55.421825    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:55.437130    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:01:55.437199    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:55.449449    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:01:55.449525    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:55.460178    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:01:55.460247    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:55.471253    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:01:55.471317    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:55.493896    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:01:55.493965    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:55.508780    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:01:55.508861    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:55.519221    8269 logs.go:276] 0 containers: []
	W0429 05:01:55.519236    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:55.519300    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:55.529690    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:01:55.529706    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:55.529712    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:55.534245    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:55.534254    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
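The "describe nodes" step does not use the host's kubectl: it invokes the binary minikube provisions inside the guest under /var/lib/minikube/binaries/<version>/, pinned to the cluster's Kubernetes version, against the kubeconfig written at /var/lib/minikube/kubeconfig. A sketch of how that command line is put together:

package main

import (
	"fmt"
	"os/exec"
)

// describeNodes runs the guest-side kubectl for the given cluster
// version, as in the Run: line above (v1.24.1 in this report).
func describeNodes(version string) (string, error) {
	kubectl := "/var/lib/minikube/binaries/" + version + "/kubectl"
	cmd := "sudo " + kubectl +
		" describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := describeNodes("v1.24.1")
	if err != nil {
		fmt.Println("describe nodes failed:", err)
		return
	}
	fmt.Print(out)
}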
	I0429 05:01:55.568824    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:01:55.568837    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:01:55.585667    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:01:55.585679    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:01:55.597098    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:01:55.597109    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:01:55.615666    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:01:55.615679    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:55.627051    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:55.627062    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:01:55.644161    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:01:55.644254    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:01:55.660811    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:01:55.660818    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:01:55.674372    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:01:55.674387    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:01:55.685782    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:01:55.685794    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:01:55.697108    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:01:55.697118    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:01:55.719458    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:01:55.719468    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:01:55.737133    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:55.737144    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:55.760462    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:55.760471    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:01:55.760507    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:01:55.760511    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:01:55.760514    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:01:55.760519    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:55.760522    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
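The "Found kubelet problem" warnings and the "X Problems detected in kubelet:" summary above come from scanning the kubelet journal for known failure signatures and replaying the matches. A simplified sketch of such a scan; the two substring patterns are stand-ins chosen to match the RBAC errors in this run, not minikube's actual signature list:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// findKubeletProblems pulls the recent kubelet journal and keeps
// lines matching a (stand-in) set of problem signatures.
func findKubeletProblems() ([]string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		return nil, err
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "Failed to watch") ||
			strings.Contains(line, "is forbidden") {
			problems = append(problems, line)
		}
	}
	return problems, sc.Err()
}

func main() {
	problems, err := findKubeletProblems()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	if len(problems) > 0 {
		fmt.Println("X Problems detected in kubelet:")
		for _, p := range problems {
			fmt.Println("  " + p)
		}
	}
}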
	I0429 05:01:55.776210    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:55.776288    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:55.786657    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:55.786735    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:55.796877    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:55.796953    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:55.807996    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:55.808057    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:55.819809    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:55.819879    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:55.830109    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:55.830168    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:55.840888    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:55.840968    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:55.850771    8430 logs.go:276] 0 containers: []
	W0429 05:01:55.850782    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:55.850833    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:55.861022    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:55.861042    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:55.861048    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:55.873143    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:55.873153    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:55.884532    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:55.884547    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:55.897465    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:55.897477    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:55.901918    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:55.901925    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:55.930551    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:55.930565    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:55.941956    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:55.941967    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:55.956758    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:55.956772    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:55.971859    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:55.971869    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:55.997390    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:55.997397    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:56.032413    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:56.032427    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:56.046734    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:56.046745    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:56.060981    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:56.060994    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:56.075304    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:56.075314    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:56.111761    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:56.111772    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:56.129280    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:56.129291    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:56.144154    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:56.144164    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:58.657869    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:03.660240    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:03.660398    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:03.677927    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:03.678004    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:03.691682    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:03.691749    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:03.702233    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:03.702301    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:03.712905    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:03.712970    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:03.723392    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:03.723464    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:03.736623    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:03.736701    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:03.747389    8430 logs.go:276] 0 containers: []
	W0429 05:02:03.747401    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:03.747463    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:03.760880    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:03.760897    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:03.760903    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:03.772595    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:03.772604    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:03.810212    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:03.810220    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:03.834403    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:03.834414    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:03.848685    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:03.848695    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:03.871240    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:03.871251    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:03.885869    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:03.885882    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:03.921675    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:03.921685    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:03.937036    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:03.937047    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:03.955359    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:03.955373    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:03.970093    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:03.970104    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:03.993900    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:03.993910    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
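The "container status" command above is a shell fallback chain: the backquoted which crictl || echo crictl substitutes either crictl's full path or the bare name, and the trailing || sudo docker ps -a covers machines where crictl is missing or errors out, so the step works on both CRI and plain-Docker runtimes. A sketch running the same chain locally:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when installed and falls back to
// docker, reproducing the exact command string from the Run: line.
func containerStatus() (string, error) {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(out)
}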
	I0429 05:02:04.005457    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:04.005469    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:04.009841    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:04.009848    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:04.021804    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:04.021815    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:04.037616    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:04.037627    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:04.055580    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:04.055591    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:05.764680    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:06.570693    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:10.767009    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:10.767191    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:10.783086    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:02:10.783175    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:10.795883    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:02:10.795961    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:10.806970    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:02:10.807037    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:10.817513    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:02:10.817583    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:10.827455    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:02:10.827519    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:10.837783    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:02:10.837854    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:10.847667    8269 logs.go:276] 0 containers: []
	W0429 05:02:10.847679    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:10.847736    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:10.862275    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:02:10.862288    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:10.862293    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:10.867109    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:02:10.867117    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:02:10.878570    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:02:10.878581    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:02:10.889945    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:02:10.889959    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:02:10.904166    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:10.904176    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:10.928715    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:02:10.928722    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:10.940433    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:10.940444    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:02:10.958719    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:10.958842    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:10.975318    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:10.975325    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:11.011273    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:02:11.011284    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:02:11.025673    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:02:11.025683    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:02:11.039487    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:02:11.039498    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:02:11.050804    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:02:11.050816    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:02:11.068546    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:02:11.068557    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:02:11.085547    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:11.085559    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:02:11.085585    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:02:11.085590    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:11.085597    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:11.085601    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:11.085605    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:11.573057    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:11.573247    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:11.591177    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:11.591261    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:11.605028    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:11.605093    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:11.616407    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:11.616471    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:11.627064    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:11.627136    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:11.638181    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:11.638243    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:11.648660    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:11.648730    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:11.658825    8430 logs.go:276] 0 containers: []
	W0429 05:02:11.658836    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:11.658891    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:11.669559    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:11.669578    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:11.669583    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:11.682366    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:11.682377    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:11.720595    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:11.720606    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:11.735189    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:11.735200    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:11.747637    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:11.747648    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:11.764983    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:11.764995    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:11.776296    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:11.776308    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:11.812436    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:11.812449    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:11.827339    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:11.827352    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:11.841758    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:11.841769    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:11.852882    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:11.852894    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:11.864597    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:11.864612    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:11.877245    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:11.877258    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:11.881963    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:11.881969    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:11.895914    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:11.895924    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:11.919890    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:11.919897    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:11.946813    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:11.946826    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:14.469203    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:21.088597    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:19.471634    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:19.471811    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:19.487798    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:19.487883    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:19.501217    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:19.501284    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:19.512270    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:19.512344    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:19.522827    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:19.522905    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:19.533576    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:19.533649    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:19.545513    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:19.545587    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:19.555799    8430 logs.go:276] 0 containers: []
	W0429 05:02:19.555811    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:19.555870    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:19.565691    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:19.565710    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:19.565716    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:19.579549    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:19.579560    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:19.591730    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:19.591741    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:19.606612    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:19.606621    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:19.621641    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:19.621652    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:19.634443    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:19.634457    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:19.648166    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:19.648180    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:19.673602    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:19.673613    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:19.689178    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:19.689190    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:19.700609    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:19.700622    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:19.718068    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:19.718079    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:19.730391    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:19.730404    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:19.768137    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:19.768146    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:19.772473    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:19.772488    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:19.807096    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:19.807109    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:19.820800    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:19.820809    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:19.832357    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:19.832373    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:22.357188    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:26.090914    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:26.091093    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:26.108446    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:02:26.108529    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:26.123208    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:02:26.123282    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:26.135177    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:02:26.135248    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:26.145196    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:02:26.145261    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:26.156996    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:02:26.157065    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:26.167815    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:02:26.167885    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:26.178587    8269 logs.go:276] 0 containers: []
	W0429 05:02:26.178597    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:26.178654    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:26.188966    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:02:26.188983    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:02:26.188989    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:02:26.203343    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:02:26.203352    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:02:26.217970    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:02:26.217980    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:02:26.229450    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:02:26.229460    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:02:26.248357    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:02:26.248368    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:02:26.259788    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:02:26.259803    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:26.271099    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:26.271112    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:02:26.288279    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:26.288376    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:26.304559    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:26.304565    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:26.339786    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:02:26.339801    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:02:26.353399    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:02:26.353410    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:02:26.371195    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:02:26.371204    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:02:26.388935    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:26.388945    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:26.411990    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:26.411998    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:26.416175    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:26.416185    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:02:26.416209    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:02:26.416213    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:26.416217    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:26.416220    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:26.416223    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:27.359623    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:27.359763    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:27.373810    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:27.373900    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:27.385510    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:27.385581    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:27.395670    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:27.395733    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:27.405545    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:27.405617    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:27.416134    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:27.416191    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:27.426539    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:27.426611    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:27.436780    8430 logs.go:276] 0 containers: []
	W0429 05:02:27.436791    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:27.436845    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:27.447677    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:27.447700    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:27.447706    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:27.451982    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:27.451988    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:27.466344    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:27.466355    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:27.482067    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:27.482077    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:27.496274    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:27.496288    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:27.511414    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:27.511430    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:27.535017    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:27.535025    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:27.559676    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:27.559685    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:27.574442    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:27.574454    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:27.586415    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:27.586426    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:27.604032    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:27.604042    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:27.615284    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:27.615295    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:27.626323    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:27.626336    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:27.664134    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:27.664144    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:27.699050    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:27.699063    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:27.713420    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:27.713431    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:27.725419    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:27.725430    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:30.245700    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:36.420099    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:35.247988    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
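
Between gathering rounds, both processes re-probe https://10.0.2.15:8443/healthz (api_server.go:253); "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" means no response headers arrived within the client timeout, i.e. nothing is answering on port 8443 inside the guest. A minimal probe sketch, assuming a 5-second timeout and InsecureSkipVerify for the cluster's self-signed certificate; minikube's actual check in api_server.go may differ:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver healthz endpoint.
// Sketch only: the timeout and TLS settings here are assumptions.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. Client.Timeout exceeded while awaiting headers
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
```
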
	I0429 05:02:35.248151    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:35.259460    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:35.259528    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:35.269989    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:35.270055    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:35.280633    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:35.280704    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:35.291205    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:35.291279    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:35.302365    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:35.302432    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:35.313000    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:35.313074    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:35.325868    8430 logs.go:276] 0 containers: []
	W0429 05:02:35.325882    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:35.325948    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:35.341151    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:35.341172    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:35.341178    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:35.352331    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:35.352345    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:35.363239    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:35.363251    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:35.376344    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:35.376359    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:35.392132    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:35.392145    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:35.403852    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:35.403864    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:35.421580    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:35.421594    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:35.433335    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:35.433345    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:35.458317    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:35.458323    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:35.472497    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:35.472507    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:35.476624    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:35.476631    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:35.511801    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:35.511810    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:35.527066    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:35.527078    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:35.539375    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:35.539386    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:35.554679    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:35.554692    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:35.590745    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:35.590755    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:35.604449    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:35.604483    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
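
The gathering phase itself runs a fixed command per source: `docker logs --tail 400 <id>` for each container, `journalctl -u <unit> -n 400` for the kubelet and Docker daemons, dmesg for the kernel ring buffer, and `kubectl describe nodes` against the in-VM kubeconfig, each capped at the most recent 400 lines. A hedged helper in the same vein (tailContainer is a hypothetical name; the report runs this over SSH rather than locally):

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainer returns the last n lines of a container's logs, like the
// "docker logs --tail 400 <id>" commands above. CombinedOutput captures
// both streams, since docker logs replays the container's stderr too.
func tailContainer(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainer("14ead7d448e3", 400)
	if err != nil {
		fmt.Println("gather failed:", err)
	}
	fmt.Print(logs)
}
```
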
	I0429 05:02:38.131292    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:41.422486    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:41.422896    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:41.473895    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:02:41.474016    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:41.497410    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:02:41.497489    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:41.511379    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:02:41.511458    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:41.522019    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:02:41.522085    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:41.534587    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:02:41.534661    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:41.547273    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:02:41.547338    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:41.561433    8269 logs.go:276] 0 containers: []
	W0429 05:02:41.561444    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:41.561504    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:41.572044    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:02:41.572058    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:41.572063    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:41.576607    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:41.576616    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:41.645080    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:02:41.645092    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:02:41.661207    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:02:41.661219    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:02:41.672557    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:02:41.672569    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:02:41.684613    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:02:41.684628    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:02:41.705877    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:02:41.705888    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:02:41.717121    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:41.717134    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:41.742144    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:41.742155    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:02:41.759969    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:41.760064    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:41.776832    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:02:41.776841    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:02:41.798418    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:02:41.798430    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:02:41.813122    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:02:41.813133    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:02:41.828161    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:02:41.828179    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:41.839330    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:41.839339    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:02:41.839366    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:02:41.839374    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:41.839378    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:41.839382    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:41.839476    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
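
From here the report continues to interleave two concurrent minikube processes, pid 8269 (whose kubelet problems reference running-upgrade-310000) and pid 8430, which is why timestamps appear to jump backwards between adjacent lines. The pid is the third field of the klog header, so the stream can be split per run; a sketch assuming the standard `Lmmdd hh:mm:ss.uuuuuu pid file:line] msg` layout:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Prints only log lines whose klog header pid matches the argument,
// de-interleaving the two runs mixed together in this report.
func main() {
	want := "8430"
	if len(os.Args) > 1 {
		want = os.Args[1]
	}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[2] == want {
			fmt.Println(sc.Text())
		}
	}
}
```
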
	I0429 05:02:43.133816    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:43.133993    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:43.151945    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:43.152024    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:43.168429    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:43.168504    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:43.179521    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:43.179589    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:43.189672    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:43.189730    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:43.199718    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:43.199782    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:43.219222    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:43.219289    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:43.229559    8430 logs.go:276] 0 containers: []
	W0429 05:02:43.229570    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:43.229624    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:43.249681    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:43.249700    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:43.249706    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:43.272431    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:43.272439    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:43.308309    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:43.308316    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:43.322050    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:43.322060    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:43.334118    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:43.334129    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:43.346792    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:43.346804    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:43.381872    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:43.381882    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:43.395778    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:43.395789    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:43.407612    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:43.407622    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:43.422827    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:43.422838    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:43.434949    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:43.434961    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:43.439376    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:43.439385    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:43.467408    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:43.467422    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:43.482070    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:43.482080    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:43.497882    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:43.497892    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:43.509253    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:43.509264    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:43.521308    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:43.521318    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:46.040623    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:51.843625    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:51.043002    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:51.043273    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:51.078146    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:51.078260    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:51.094569    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:51.094652    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:51.107159    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:51.107234    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:51.118892    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:51.118958    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:51.129375    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:51.129444    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:51.140156    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:51.140216    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:51.153672    8430 logs.go:276] 0 containers: []
	W0429 05:02:51.153685    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:51.153748    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:51.164285    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:51.164303    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:51.164309    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:51.187952    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:51.187962    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:51.192000    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:51.192006    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:51.205595    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:51.205605    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:51.219534    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:51.219545    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:51.234592    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:51.234603    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:51.249490    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:51.249503    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:51.287357    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:51.287367    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:51.325936    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:51.325948    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:51.355436    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:51.355448    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:51.367216    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:51.367230    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:51.378549    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:51.378561    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:51.394382    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:51.394397    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:51.406208    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:51.406219    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:51.429877    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:51.429885    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:51.444619    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:51.444634    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:51.456658    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:51.456673    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:53.970114    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:56.846532    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:56.846877    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:56.882834    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:02:56.882967    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:56.914232    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:02:56.914311    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:56.927289    8269 logs.go:276] 2 containers: [8b104f5475d1 cf01071a108e]
	I0429 05:02:56.927363    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:56.938907    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:02:56.938971    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:56.949616    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:02:56.949680    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:56.960223    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:02:56.960292    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:56.970831    8269 logs.go:276] 0 containers: []
	W0429 05:02:56.970846    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:56.970901    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:56.981465    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:02:56.981480    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:56.981484    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:57.006889    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:57.006899    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:57.043427    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:02:57.043442    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:02:57.057674    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:02:57.057687    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:02:57.071868    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:02:57.071878    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:02:57.084266    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:02:57.084277    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:02:57.098270    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:02:57.098284    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:02:57.116529    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:02:57.116538    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:02:57.128306    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:02:57.128318    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:57.139790    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:57.139800    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:02:57.158828    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:57.158923    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:57.175348    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:57.175355    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:57.180222    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:02:57.180231    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:02:57.192230    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:02:57.192243    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:02:57.207399    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:57.207408    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:02:57.207432    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:02:57.207436    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:02:57.207440    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:02:57.207444    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:57.207447    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:58.972648    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:58.973058    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:59.009075    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:59.009211    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:59.029243    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:59.029327    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:59.043874    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:59.043955    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:59.056664    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:59.056732    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:59.067117    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:59.067187    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:59.081483    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:59.081555    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:59.092441    8430 logs.go:276] 0 containers: []
	W0429 05:02:59.092452    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:59.092506    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:59.107690    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:59.107709    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:59.107714    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:59.126060    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:59.126075    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:59.148919    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:59.148931    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:59.161770    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:59.161781    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:59.173069    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:59.173081    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:59.198844    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:59.198854    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:59.211018    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:59.214442    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:59.229818    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:59.229828    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:59.241317    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:59.241328    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:59.264698    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:59.264705    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:59.305277    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:59.305290    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:59.310673    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:59.310681    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:59.326350    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:59.326361    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:59.345717    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:59.345731    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:59.361664    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:59.361678    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:59.373051    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:59.373064    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:59.384897    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:59.384915    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:01.921741    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:07.210673    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:06.923990    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:06.924221    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:06.940336    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:06.940430    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:06.953454    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:06.953520    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:06.965952    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:06.966027    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:06.977847    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:06.977927    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:06.988624    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:06.988696    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:06.999425    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:06.999495    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:07.009291    8430 logs.go:276] 0 containers: []
	W0429 05:03:07.009303    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:07.009355    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:07.021491    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:07.021510    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:07.021515    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:07.032606    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:07.032622    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:07.048314    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:07.048324    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:07.085636    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:07.085648    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:07.099543    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:07.099560    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:07.117733    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:07.117744    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:07.129182    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:07.129197    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:07.151730    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:07.151740    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:07.166904    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:07.166919    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:07.171627    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:07.171654    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:07.207673    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:07.207684    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:07.221754    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:07.221765    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:07.246001    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:07.246017    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:07.262539    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:07.262550    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:07.275225    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:07.275238    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:07.288477    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:07.288488    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:07.299370    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:07.299381    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:12.212978    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:12.213390    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:12.251919    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:03:12.252057    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:12.273512    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:03:12.273631    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:12.288414    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:03:12.288487    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:12.300929    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:03:12.300992    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:12.312191    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:03:12.312262    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:12.323612    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:03:12.323683    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:12.334085    8269 logs.go:276] 0 containers: []
	W0429 05:03:12.334098    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:12.334161    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:12.349274    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:03:12.349292    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:12.349298    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:03:12.368217    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:12.368310    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:12.384614    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:12.384620    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:12.389280    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:03:12.389290    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:03:12.401308    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:12.401319    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:12.424746    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:03:12.424757    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:03:12.435969    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:03:12.435981    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:03:12.448026    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:03:12.448037    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:03:12.460228    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:03:12.460243    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:12.473784    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:03:12.473798    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:03:12.488018    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:03:12.488027    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:03:12.504224    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:03:12.504239    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:03:12.525290    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:12.525305    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:12.560335    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:03:12.560347    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:03:12.574825    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:03:12.574835    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:03:12.593783    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:03:12.593795    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:03:12.611090    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:12.611099    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:03:12.611124    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:03:12.611129    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:12.611133    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:12.611157    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:12.611162    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:09.813792    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:14.815627    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:14.815803    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:14.835043    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:14.835124    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:14.849159    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:14.849227    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:14.861058    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:14.861133    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:14.871560    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:14.871628    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:14.882225    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:14.882287    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:14.892744    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:14.892814    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:14.902451    8430 logs.go:276] 0 containers: []
	W0429 05:03:14.902463    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:14.902513    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:14.914621    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:14.914636    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:14.914641    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:14.925650    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:14.925664    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:14.950565    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:14.950575    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:14.962372    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:14.962383    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:14.977533    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:14.977549    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:14.989861    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:14.989871    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:15.004110    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:15.004125    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:15.016018    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:15.016030    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:15.036278    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:15.036289    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:15.048213    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:15.048224    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:15.072075    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:15.072083    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:15.076150    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:15.076156    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:15.092354    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:15.092368    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:15.112430    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:15.112443    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:15.124472    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:15.124482    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:15.163187    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:15.163201    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:15.199920    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:15.199931    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:17.716736    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:22.615366    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:22.718975    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:22.719179    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:22.737225    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:22.737319    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:22.750330    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:22.750404    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:22.762028    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:22.762095    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:22.772088    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:22.772177    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:22.783249    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:22.783317    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:22.794025    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:22.794087    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:22.804744    8430 logs.go:276] 0 containers: []
	W0429 05:03:22.804755    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:22.804814    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:22.815288    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:22.815305    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:22.815310    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:22.829458    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:22.829468    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:22.840490    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:22.840499    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:22.855008    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:22.855023    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:22.865986    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:22.865995    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:22.888499    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:22.888507    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:22.924087    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:22.924098    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:22.935791    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:22.935800    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:22.950863    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:22.950877    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:22.964211    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:22.964223    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:22.983005    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:22.983016    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:22.994471    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:22.994481    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:23.031025    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:23.031034    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:23.035346    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:23.035353    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:23.049064    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:23.049074    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:23.074270    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:23.074281    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:23.092453    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:23.092467    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:27.617810    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:27.617963    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:27.629435    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:03:27.629513    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:27.639830    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:03:27.639897    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:27.650611    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:03:27.650687    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:27.661303    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:03:27.661376    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:27.671427    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:03:27.671485    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:27.681771    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:03:27.681840    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:27.696255    8269 logs.go:276] 0 containers: []
	W0429 05:03:27.696266    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:27.696321    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:27.709676    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:03:27.709695    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:27.709700    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:27.745207    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:03:27.745228    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:03:27.759518    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:03:27.759529    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:03:27.773896    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:03:27.773907    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:03:27.785586    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:03:27.785598    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:03:27.796671    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:03:27.796682    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:03:27.809293    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:03:27.809304    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:27.820701    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:27.820711    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:03:27.838672    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:27.838767    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:27.855094    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:03:27.855102    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:03:27.870559    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:03:27.870569    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:03:27.882770    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:03:27.882783    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:03:27.899854    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:27.899864    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:27.924465    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:27.924473    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:27.929282    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:03:27.929288    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:03:27.941516    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:03:27.941529    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:03:27.953103    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:27.953114    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:03:27.953140    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:03:27.953145    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:27.953148    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:27.953153    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:27.953156    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:25.608362    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:30.609118    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:30.609310    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:30.620699    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:30.620775    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:30.631304    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:30.631365    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:30.641779    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:30.641843    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:30.653090    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:30.653155    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:30.663315    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:30.663379    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:30.674194    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:30.674255    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:30.685049    8430 logs.go:276] 0 containers: []
	W0429 05:03:30.685059    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:30.685117    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:30.695884    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:30.695903    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:30.695909    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:30.707666    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:30.707678    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:30.724683    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:30.724694    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:30.736062    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:30.736072    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:30.747619    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:30.747629    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:30.784144    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:30.784153    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:30.819444    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:30.819455    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:30.833416    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:30.833425    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:30.848757    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:30.848771    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:30.873059    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:30.873070    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:30.887460    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:30.887472    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:30.898742    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:30.898753    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:30.921263    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:30.921271    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:30.932839    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:30.932854    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:30.936843    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:30.936850    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:30.950569    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:30.950579    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:30.961673    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:30.961685    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:33.478880    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:37.957307    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:38.480042    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:38.480296    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:38.509538    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:38.509647    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:38.528563    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:38.528642    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:38.542494    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:38.542575    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:38.557381    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:38.557465    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:38.567503    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:38.567570    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:38.578752    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:38.578821    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:38.589293    8430 logs.go:276] 0 containers: []
	W0429 05:03:38.589304    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:38.589361    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:38.599361    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:38.599378    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:38.599385    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:38.660532    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:38.660547    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:38.674967    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:38.674978    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:38.689896    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:38.689907    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:38.704489    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:38.704501    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:38.708593    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:38.708600    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:38.739444    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:38.739455    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:38.761983    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:38.761997    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:38.773480    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:38.773494    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:38.795316    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:38.795323    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:38.809389    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:38.809401    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:38.820085    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:38.820097    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:38.832052    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:38.832063    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:38.844894    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:38.844905    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:38.856331    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:38.856342    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:38.893526    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:38.893535    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:38.905806    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:38.905817    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:42.959678    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:42.959958    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:42.980296    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:03:42.980396    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:42.995927    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:03:42.996013    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:43.008893    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:03:43.008969    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:43.020693    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:03:43.020766    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:43.031045    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:03:43.031112    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:43.041230    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:03:43.041307    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:43.051784    8269 logs.go:276] 0 containers: []
	W0429 05:03:43.051798    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:43.051861    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:43.066435    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:03:43.066450    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:43.066455    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:03:43.083225    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:43.083321    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:43.099739    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:43.099749    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:43.135129    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:03:43.135140    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:03:43.150909    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:03:43.150920    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:03:43.162702    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:03:43.162712    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:03:43.174209    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:03:43.174220    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:03:43.188826    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:03:43.188838    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:03:43.201009    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:03:43.201018    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:03:43.219050    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:43.219059    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:43.243267    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:03:43.243274    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:03:43.255036    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:03:43.255048    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:03:43.266604    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:03:43.266615    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:43.278112    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:43.278124    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:43.282673    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:03:43.282682    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:03:43.295238    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:03:43.295248    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:03:43.342576    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:43.342586    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:03:43.342611    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:03:43.342615    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:43.342626    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:43.342633    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:43.342638    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:41.423145    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:46.425782    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:46.425924    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:46.439791    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:46.439865    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:46.453171    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:46.453239    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:46.463968    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:46.464033    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:46.475061    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:46.475130    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:46.486009    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:46.486075    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:46.497164    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:46.497236    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:46.507287    8430 logs.go:276] 0 containers: []
	W0429 05:03:46.507298    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:46.507350    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:46.518239    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:46.518257    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:46.518262    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:46.532338    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:46.532354    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:46.544167    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:46.544181    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:46.559586    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:46.559597    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:46.571449    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:46.571461    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:46.582915    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:46.582928    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:46.595865    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:46.595876    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:46.610679    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:46.610690    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:46.622287    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:46.622299    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:46.637447    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:46.637459    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:46.674875    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:46.674885    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:46.710240    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:46.710251    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:46.734679    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:46.734689    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:46.748728    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:46.748738    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:46.760977    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:46.760987    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:46.781269    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:46.781281    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:46.785344    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:46.785351    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:53.346870    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:49.311015    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:58.349286    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:58.349618    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:58.383112    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:03:58.383242    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:58.404066    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:03:58.404165    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:58.425213    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:03:58.425292    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:58.442497    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:03:58.442568    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:58.453639    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:03:58.453709    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:58.464951    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:03:58.465018    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:58.475933    8269 logs.go:276] 0 containers: []
	W0429 05:03:58.475947    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:58.476010    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:58.499173    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:03:58.499196    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:03:58.499202    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:03:58.518382    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:58.518395    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:58.543456    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:58.543467    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:03:58.561631    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:58.561724    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:58.578100    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:58.578107    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:58.582259    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:03:58.582265    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:03:58.596279    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:03:58.596292    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:03:58.607981    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:03:58.607994    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:03:58.635719    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:03:58.635730    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:03:58.650998    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:03:58.651008    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:58.662844    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:58.662853    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:54.313821    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:54.314252    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:54.357117    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:54.357256    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:54.379015    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:54.379117    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:54.395841    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:54.395917    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:54.408259    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:54.408335    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:54.419012    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:54.419083    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:54.429783    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:54.429853    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:54.440880    8430 logs.go:276] 0 containers: []
	W0429 05:03:54.440892    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:54.440954    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:54.451366    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:54.451383    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:54.451389    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:54.469452    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:54.469461    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:54.484276    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:54.484288    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:54.519360    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:54.519371    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:54.534430    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:54.534441    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:54.545822    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:54.545837    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:54.557821    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:54.557832    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:54.576939    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:54.576948    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:54.588419    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:54.588430    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:54.610594    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:54.610603    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:54.624213    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:54.624223    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:54.628423    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:54.628430    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:54.654338    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:54.654350    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:54.666066    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:54.666079    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:54.681408    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:54.681419    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:54.693980    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:54.693990    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:54.731951    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:54.731958    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:57.245454    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:58.700216    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:03:58.700229    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:03:58.714479    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:03:58.714490    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:03:58.726335    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:03:58.726348    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:03:58.738330    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:03:58.738341    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:03:58.750187    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:03:58.750199    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:03:58.763722    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:58.763732    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:03:58.763758    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:03:58.763763    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:03:58.763767    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:03:58.763772    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:58.763775    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:04:02.246048    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:02.246516    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:02.286761    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:04:02.286893    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:02.308799    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:04:02.308907    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:02.324112    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:04:02.324192    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:02.337520    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:04:02.337598    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:02.348181    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:04:02.348244    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:02.358666    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:04:02.358730    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:02.369585    8430 logs.go:276] 0 containers: []
	W0429 05:04:02.369600    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:02.369674    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:02.380567    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:04:02.380584    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:02.380590    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:04:02.416583    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:02.416591    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:02.452010    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:04:02.452022    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:04:02.467238    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:04:02.467248    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:04:02.483661    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:04:02.483674    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:04:02.495336    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:02.495348    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:02.499905    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:04:02.499912    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:04:02.511032    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:04:02.511045    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:04:02.522800    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:04:02.522810    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:04:02.534559    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:04:02.534577    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:04:02.550578    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:04:02.550589    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:04:02.562432    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:04:02.562442    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:04:02.580204    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:02.580214    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:02.602872    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:04:02.602884    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:04:02.617149    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:04:02.617162    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:04:02.641884    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:04:02.641894    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:04:02.656360    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:04:02.656371    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:05.170380    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:08.767904    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:10.172687    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:10.172871    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:10.192820    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:04:10.192908    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:10.207519    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:04:10.207599    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:10.219771    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:04:10.219846    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:10.230682    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:04:10.230747    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:10.245700    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:04:10.245768    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:10.259986    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:04:10.260050    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:10.270346    8430 logs.go:276] 0 containers: []
	W0429 05:04:10.270357    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:10.270415    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:10.281183    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:04:10.281200    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:04:10.281205    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:04:10.295522    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:04:10.295536    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:04:10.309893    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:04:10.309905    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:04:10.324571    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:04:10.324581    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:04:10.342994    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:04:10.343004    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:10.355498    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:10.355509    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:10.365090    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:10.365097    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:10.404281    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:10.404292    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:10.426370    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:10.426377    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:04:10.466836    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:04:10.466847    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:04:10.481818    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:04:10.481829    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:04:10.494199    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:04:10.494213    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:04:10.506042    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:04:10.506059    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:04:10.530324    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:04:10.530335    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:04:10.542210    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:04:10.542222    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:04:10.553728    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:04:10.553741    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:04:10.571890    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:04:10.571907    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:04:13.088848    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:13.770190    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:13.770358    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:13.784844    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:04:13.784906    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:13.797069    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:04:13.797161    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:13.808399    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:04:13.808476    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:13.818814    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:04:13.818882    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:13.829467    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:04:13.829532    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:13.840044    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:04:13.840110    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:13.852885    8269 logs.go:276] 0 containers: []
	W0429 05:04:13.852898    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:13.852947    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:13.863683    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:04:13.863701    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:13.863706    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:13.868592    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:04:13.868600    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:04:13.882710    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:04:13.882720    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:04:13.894048    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:04:13.894064    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:04:13.905961    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:13.905971    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:04:13.923669    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:13.923762    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:13.940212    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:13.940219    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:13.976134    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:04:13.976148    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:04:13.998968    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:04:13.998979    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:04:14.013661    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:04:14.013676    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:04:14.031998    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:04:14.032008    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:04:14.043150    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:04:14.043160    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:04:14.054649    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:04:14.054658    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:04:14.065866    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:04:14.065879    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:14.077136    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:04:14.077149    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:04:14.088771    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:14.088782    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:14.112535    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:14.112545    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:04:14.112569    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:04:14.112573    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:14.112577    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:14.112581    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:14.112584    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
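
(The kubelet problem flagged above is a Node-authorizer denial: a node's kubelet may only read ConfigMaps referenced by pods bound to that node, and the message says no such relationship was found. With working kubectl access it could be confirmed by an impersonated authorization check; hypothetical here, since the apiserver in this run never answers:)

    kubectl auth can-i list configmaps \
        --namespace kube-system \
        --as system:node:running-upgrade-310000 \
        --as-group system:nodes
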
	I0429 05:04:18.091310    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:18.091380    8430 kubeadm.go:591] duration metric: took 4m3.607182375s to restartPrimaryControlPlane
	W0429 05:04:18.091443    8430 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 05:04:18.091471    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0429 05:04:19.182785    8430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.091305833s)
	I0429 05:04:19.182854    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 05:04:19.188003    8430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 05:04:19.190940    8430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 05:04:19.193756    8430 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 05:04:19.193763    8430 kubeadm.go:156] found existing configuration files:
	
	I0429 05:04:19.193787    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/admin.conf
	I0429 05:04:19.196245    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 05:04:19.196264    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 05:04:19.199291    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/kubelet.conf
	I0429 05:04:19.202229    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 05:04:19.202246    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 05:04:19.204780    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/controller-manager.conf
	I0429 05:04:19.207405    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 05:04:19.207430    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 05:04:19.210278    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/scheduler.conf
	I0429 05:04:19.212740    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 05:04:19.213307    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
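
(The four grep/rm pairs above implement a single rule: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is treated as stale and removed before kubeadm init runs. Condensed into a sketch, using the same endpoint the log greps for:)

    endpoint='https://control-plane.minikube.internal:51384'
    for f in admin kubelet controller-manager scheduler; do
        # keep the file only if it points at the expected endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done
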
	I0429 05:04:19.215880    8430 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 05:04:19.232859    8430 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0429 05:04:19.232893    8430 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 05:04:19.282442    8430 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 05:04:19.282502    8430 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 05:04:19.282543    8430 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 05:04:19.330829    8430 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 05:04:19.334197    8430 out.go:204]   - Generating certificates and keys ...
	I0429 05:04:19.334230    8430 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 05:04:19.334261    8430 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 05:04:19.334306    8430 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 05:04:19.334342    8430 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 05:04:19.334378    8430 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 05:04:19.334404    8430 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 05:04:19.334436    8430 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 05:04:19.334522    8430 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 05:04:19.334560    8430 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 05:04:19.334595    8430 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 05:04:19.334626    8430 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 05:04:19.334657    8430 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 05:04:19.482917    8430 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 05:04:19.521166    8430 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 05:04:19.673660    8430 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 05:04:20.009456    8430 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 05:04:20.040905    8430 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 05:04:20.041292    8430 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 05:04:20.041314    8430 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 05:04:20.112708    8430 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 05:04:20.116003    8430 out.go:204]   - Booting up control plane ...
	I0429 05:04:20.116052    8430 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 05:04:20.116143    8430 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 05:04:20.116360    8430 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 05:04:20.116419    8430 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 05:04:20.116527    8430 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 05:04:24.612586    8430 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501802 seconds
	I0429 05:04:24.612645    8430 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 05:04:24.616517    8430 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 05:04:25.135236    8430 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 05:04:25.135476    8430 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-383000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 05:04:25.639369    8430 kubeadm.go:309] [bootstrap-token] Using token: xmutsd.mvjfrqnk9xs5g1vn
	I0429 05:04:25.644519    8430 out.go:204]   - Configuring RBAC rules ...
	I0429 05:04:25.644581    8430 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 05:04:25.644627    8430 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 05:04:25.648844    8430 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 05:04:25.649752    8430 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 05:04:25.650562    8430 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 05:04:25.651359    8430 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 05:04:25.654526    8430 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 05:04:25.825235    8430 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 05:04:26.043030    8430 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 05:04:26.043548    8430 kubeadm.go:309] 
	I0429 05:04:26.043582    8430 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 05:04:26.043585    8430 kubeadm.go:309] 
	I0429 05:04:26.043622    8430 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 05:04:26.043625    8430 kubeadm.go:309] 
	I0429 05:04:26.043649    8430 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 05:04:26.043686    8430 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 05:04:26.043723    8430 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 05:04:26.043726    8430 kubeadm.go:309] 
	I0429 05:04:26.043753    8430 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 05:04:26.043755    8430 kubeadm.go:309] 
	I0429 05:04:26.043785    8430 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 05:04:26.043788    8430 kubeadm.go:309] 
	I0429 05:04:26.043814    8430 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 05:04:26.043850    8430 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 05:04:26.043889    8430 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 05:04:26.043892    8430 kubeadm.go:309] 
	I0429 05:04:26.043935    8430 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 05:04:26.043970    8430 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 05:04:26.043975    8430 kubeadm.go:309] 
	I0429 05:04:26.044023    8430 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xmutsd.mvjfrqnk9xs5g1vn \
	I0429 05:04:26.044073    8430 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4832dc51ff6d0e6d2b485eb727ddc01b0033877744e5e13a6c0f8b67a1b7145 \
	I0429 05:04:26.044084    8430 kubeadm.go:309] 	--control-plane 
	I0429 05:04:26.044090    8430 kubeadm.go:309] 
	I0429 05:04:26.044138    8430 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 05:04:26.044144    8430 kubeadm.go:309] 
	I0429 05:04:26.044184    8430 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xmutsd.mvjfrqnk9xs5g1vn \
	I0429 05:04:26.044234    8430 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4832dc51ff6d0e6d2b485eb727ddc01b0033877744e5e13a6c0f8b67a1b7145 
	I0429 05:04:26.044510    8430 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
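
(The single [WARNING] kubeadm emits is advisory: the kubelet was started with `systemctl start` a few lines below but never enabled as a unit, so it would not survive a guest reboot. The fix the warning itself suggests:)

    sudo systemctl enable kubelet.service
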
	I0429 05:04:26.044519    8430 cni.go:84] Creating CNI manager for ""
	I0429 05:04:26.044527    8430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:04:26.048502    8430 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 05:04:26.054431    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 05:04:26.057261    8430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
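
(The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. What follows is only a representative bridge conflist of the kind minikube generates for this driver/runtime combination; the exact fields are assumptions, though the 10.244.0.0/16 subnet matches the pod IPs visible in the coredns logs further down:)

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF
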
	I0429 05:04:26.061740    8430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 05:04:26.061787    8430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 05:04:26.061811    8430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-383000 minikube.k8s.io/updated_at=2024_04_29T05_04_26_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844 minikube.k8s.io/name=stopped-upgrade-383000 minikube.k8s.io/primary=true
	I0429 05:04:26.104489    8430 kubeadm.go:1107] duration metric: took 42.736083ms to wait for elevateKubeSystemPrivileges
	I0429 05:04:26.104505    8430 ops.go:34] apiserver oom_adj: -16
	W0429 05:04:26.104603    8430 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 05:04:26.104608    8430 kubeadm.go:393] duration metric: took 4m11.633794s to StartCluster
	I0429 05:04:26.104617    8430 settings.go:142] acquiring lock: {Name:mka93054a23bdbf29aca25affe181be869710883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:04:26.104747    8430 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:04:26.105165    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/kubeconfig: {Name:mkc4105502c44b2331a2dd91226134a74ad93594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:04:26.105398    8430 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:04:26.108470    8430 out.go:177] * Verifying Kubernetes components...
	I0429 05:04:26.105405    8430 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 05:04:26.105479    8430 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:04:26.116501    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:04:26.116515    8430 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-383000"
	I0429 05:04:26.116520    8430 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-383000"
	I0429 05:04:26.116528    8430 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-383000"
	W0429 05:04:26.116531    8430 addons.go:243] addon storage-provisioner should already be in state true
	I0429 05:04:26.116531    8430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-383000"
	I0429 05:04:26.116542    8430 host.go:66] Checking if "stopped-upgrade-383000" exists ...
	I0429 05:04:26.117803    8430 kapi.go:59] client config for stopped-upgrade-383000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.key", CAFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10184fcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 05:04:26.117926    8430 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-383000"
	W0429 05:04:26.117931    8430 addons.go:243] addon default-storageclass should already be in state true
	I0429 05:04:26.117938    8430 host.go:66] Checking if "stopped-upgrade-383000" exists ...
	I0429 05:04:26.122445    8430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:04:24.116734    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:26.126510    8430 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 05:04:26.126517    8430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 05:04:26.126523    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:04:26.127336    8430 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 05:04:26.127342    8430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 05:04:26.127346    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:04:26.194988    8430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 05:04:26.200012    8430 api_server.go:52] waiting for apiserver process to appear ...
	I0429 05:04:26.200054    8430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:04:26.204242    8430 api_server.go:72] duration metric: took 98.831083ms to wait for apiserver process to appear ...
	I0429 05:04:26.204249    8430 api_server.go:88] waiting for apiserver healthz status ...
	I0429 05:04:26.204256    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:26.212032    8430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 05:04:26.212462    8430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 05:04:29.118755    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:29.118963    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:29.137587    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:04:29.137674    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:29.150829    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:04:29.150904    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:29.162295    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:04:29.162370    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:29.172546    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:04:29.172614    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:29.182984    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:04:29.183053    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:29.193429    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:04:29.193490    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:29.203860    8269 logs.go:276] 0 containers: []
	W0429 05:04:29.203871    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:29.203930    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:29.214168    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:04:29.214184    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:04:29.214189    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:04:29.225798    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:04:29.225809    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:04:29.237280    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:29.237291    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:29.260909    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:29.260922    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:04:29.278002    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:29.278096    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:29.294297    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:29.294304    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:29.330180    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:04:29.330192    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:04:29.342035    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:04:29.342046    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:04:29.366473    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:04:29.366484    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:29.379705    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:04:29.379716    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:04:29.393974    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:04:29.393986    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:04:29.405481    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:04:29.405496    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:04:29.417643    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:04:29.417655    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:04:29.430197    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:29.430205    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:29.434833    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:04:29.434843    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:04:29.450760    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:04:29.450774    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:04:29.465944    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:29.465954    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:04:29.465980    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:04:29.465984    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:29.465988    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:29.465992    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:29.465994    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:04:31.206426    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:31.206471    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:36.206886    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:36.206928    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:39.466644    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:41.207342    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:41.207374    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:44.468962    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:44.469240    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:44.497181    8269 logs.go:276] 1 containers: [9a188a09281c]
	I0429 05:04:44.497301    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:44.516425    8269 logs.go:276] 1 containers: [9d800ecb2445]
	I0429 05:04:44.516503    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:44.529098    8269 logs.go:276] 4 containers: [d67d6506457f 279483b19c81 8b104f5475d1 cf01071a108e]
	I0429 05:04:44.529182    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:44.540554    8269 logs.go:276] 1 containers: [f831da972c69]
	I0429 05:04:44.540617    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:44.551032    8269 logs.go:276] 1 containers: [c8d1a7e984cb]
	I0429 05:04:44.551096    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:44.560936    8269 logs.go:276] 1 containers: [c5c260f9a149]
	I0429 05:04:44.561007    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:44.571141    8269 logs.go:276] 0 containers: []
	W0429 05:04:44.571151    8269 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:44.571207    8269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:44.581165    8269 logs.go:276] 1 containers: [497dc39d1f27]
	I0429 05:04:44.581185    8269 logs.go:123] Gathering logs for kube-scheduler [f831da972c69] ...
	I0429 05:04:44.581190    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f831da972c69"
	I0429 05:04:44.596428    8269 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:44.596439    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:44.601175    8269 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:44.601185    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:44.644077    8269 logs.go:123] Gathering logs for coredns [8b104f5475d1] ...
	I0429 05:04:44.644090    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b104f5475d1"
	I0429 05:04:44.656576    8269 logs.go:123] Gathering logs for kube-proxy [c8d1a7e984cb] ...
	I0429 05:04:44.656588    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d1a7e984cb"
	I0429 05:04:44.667986    8269 logs.go:123] Gathering logs for kube-apiserver [9a188a09281c] ...
	I0429 05:04:44.667995    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a188a09281c"
	I0429 05:04:44.682992    8269 logs.go:123] Gathering logs for coredns [279483b19c81] ...
	I0429 05:04:44.683005    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483b19c81"
	I0429 05:04:44.694150    8269 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:44.694161    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 05:04:44.712520    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:44.712624    8269 logs.go:138] Found kubelet problem: Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:44.729678    8269 logs.go:123] Gathering logs for coredns [d67d6506457f] ...
	I0429 05:04:44.729692    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67d6506457f"
	I0429 05:04:44.743040    8269 logs.go:123] Gathering logs for kube-controller-manager [c5c260f9a149] ...
	I0429 05:04:44.743054    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5c260f9a149"
	I0429 05:04:44.760045    8269 logs.go:123] Gathering logs for storage-provisioner [497dc39d1f27] ...
	I0429 05:04:44.760059    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497dc39d1f27"
	I0429 05:04:44.771910    8269 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:44.771920    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:44.796280    8269 logs.go:123] Gathering logs for container status ...
	I0429 05:04:44.796287    8269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:44.808629    8269 logs.go:123] Gathering logs for etcd [9d800ecb2445] ...
	I0429 05:04:44.808641    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d800ecb2445"
	I0429 05:04:44.822790    8269 logs.go:123] Gathering logs for coredns [cf01071a108e] ...
	I0429 05:04:44.822801    8269 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf01071a108e"
	I0429 05:04:44.834870    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:44.834880    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 05:04:44.834908    8269 out.go:239] X Problems detected in kubelet:
	W0429 05:04:44.834913    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: W0429 11:56:58.766568    3445 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	W0429 05:04:44.834916    8269 out.go:239]   Apr 29 11:56:58 running-upgrade-310000 kubelet[3445]: E0429 11:56:58.766609    3445 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-310000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-310000' and this object
	I0429 05:04:44.834920    8269 out.go:304] Setting ErrFile to fd 2...
	I0429 05:04:44.834922    8269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:04:46.207900    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:46.207941    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:51.208813    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:51.208859    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:56.209865    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:56.209909    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0429 05:04:56.572041    8430 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0429 05:04:56.575696    8430 out.go:177] * Enabled addons: storage-provisioner
	I0429 05:04:54.837922    8269 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:56.582587    8430 addons.go:505] duration metric: took 30.4772485s for enable addons: enabled=[storage-provisioner]
	I0429 05:04:59.840341    8269 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:59.844943    8269 out.go:177] 
	W0429 05:04:59.850789    8269 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0429 05:04:59.850799    8269 out.go:239] * 
	W0429 05:04:59.851414    8269 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:04:59.862866    8269 out.go:177] 
	I0429 05:05:01.211064    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:01.211109    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:06.212868    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:06.212914    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:11.214812    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:11.214844    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-04-29 11:55:56 UTC, ends at Mon 2024-04-29 12:05:15 UTC. --
	Apr 29 12:04:56 running-upgrade-310000 dockerd[2874]: time="2024-04-29T12:04:56.616640790Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c6d443f85f8ecb30f6634a41d28a763210401004a0d812a7d19716ad897469c8 pid=15594 runtime=io.containerd.runc.v2
	Apr 29 12:04:56 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:56Z" level=error msg="ContainerStats resp: {0x40005d8280 linux}"
	Apr 29 12:04:56 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:56Z" level=error msg="ContainerStats resp: {0x4000653d80 linux}"
	Apr 29 12:04:57 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:57Z" level=error msg="ContainerStats resp: {0x400098a180 linux}"
	Apr 29 12:04:57 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 29 12:04:58 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:58Z" level=error msg="ContainerStats resp: {0x400098adc0 linux}"
	Apr 29 12:04:58 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:58Z" level=error msg="ContainerStats resp: {0x400098ae80 linux}"
	Apr 29 12:04:58 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:58Z" level=error msg="ContainerStats resp: {0x400098b780 linux}"
	Apr 29 12:04:58 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:58Z" level=error msg="ContainerStats resp: {0x40004fd280 linux}"
	Apr 29 12:04:58 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:58Z" level=error msg="ContainerStats resp: {0x40005920c0 linux}"
	Apr 29 12:04:58 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:58Z" level=error msg="ContainerStats resp: {0x4000592280 linux}"
	Apr 29 12:04:58 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:04:58Z" level=error msg="ContainerStats resp: {0x40001a7600 linux}"
	Apr 29 12:05:02 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 29 12:05:07 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 29 12:05:08 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:08Z" level=error msg="ContainerStats resp: {0x40007949c0 linux}"
	Apr 29 12:05:08 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:08Z" level=error msg="ContainerStats resp: {0x4000794e40 linux}"
	Apr 29 12:05:09 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:09Z" level=error msg="ContainerStats resp: {0x4000553580 linux}"
	Apr 29 12:05:10 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:10Z" level=error msg="ContainerStats resp: {0x400098a680 linux}"
	Apr 29 12:05:10 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:10Z" level=error msg="ContainerStats resp: {0x400098a500 linux}"
	Apr 29 12:05:10 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:10Z" level=error msg="ContainerStats resp: {0x400098aac0 linux}"
	Apr 29 12:05:10 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:10Z" level=error msg="ContainerStats resp: {0x40004fcd00 linux}"
	Apr 29 12:05:10 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:10Z" level=error msg="ContainerStats resp: {0x40004fd180 linux}"
	Apr 29 12:05:10 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:10Z" level=error msg="ContainerStats resp: {0x40004fd9c0 linux}"
	Apr 29 12:05:10 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:10Z" level=error msg="ContainerStats resp: {0x400098b900 linux}"
	Apr 29 12:05:12 running-upgrade-310000 cri-dockerd[2718]: time="2024-04-29T12:05:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c6d443f85f8ec       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   ca2f1f6ba1d2d
	6a1f689846fe9       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   9c1f743e93594
	d67d6506457f1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ca2f1f6ba1d2d
	279483b19c81f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9c1f743e93594
	c8d1a7e984cba       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   aa576ce7097f1
	497dc39d1f27d       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   9fd8968cb85de
	c5c260f9a1497       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   0a460ca97912c
	9d800ecb24454       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   430ca31f33e63
	f831da972c698       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   292f100cee7b0
	9a188a09281c5       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   3bd43be6a9fbf
	
	
	==> coredns [279483b19c81] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1707166881416912750.6615435074297805013. HINFO: read udp 10.244.0.2:44624->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1707166881416912750.6615435074297805013. HINFO: read udp 10.244.0.2:44144->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1707166881416912750.6615435074297805013. HINFO: read udp 10.244.0.2:38759->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1707166881416912750.6615435074297805013. HINFO: read udp 10.244.0.2:56685->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a1f689846fe] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6218741528810862729.7167785102189780768. HINFO: read udp 10.244.0.2:58288->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6218741528810862729.7167785102189780768. HINFO: read udp 10.244.0.2:59692->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6218741528810862729.7167785102189780768. HINFO: read udp 10.244.0.2:44075->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6218741528810862729.7167785102189780768. HINFO: read udp 10.244.0.2:42658->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6218741528810862729.7167785102189780768. HINFO: read udp 10.244.0.2:47665->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c6d443f85f8e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8520219591818499360.2476663112151228718. HINFO: read udp 10.244.0.3:50296->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8520219591818499360.2476663112151228718. HINFO: read udp 10.244.0.3:60673->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8520219591818499360.2476663112151228718. HINFO: read udp 10.244.0.3:49382->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8520219591818499360.2476663112151228718. HINFO: read udp 10.244.0.3:38705->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8520219591818499360.2476663112151228718. HINFO: read udp 10.244.0.3:54564->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d67d6506457f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:41859->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:45560->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:34286->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:48276->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:34858->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:54861->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:39229->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:60649->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:56358->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594771904543406256.3148605556979605330. HINFO: read udp 10.244.0.3:40014->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-310000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-310000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=running-upgrade-310000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T05_00_55_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:00:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-310000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:05:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:00:55 +0000   Mon, 29 Apr 2024 12:00:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:00:55 +0000   Mon, 29 Apr 2024 12:00:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:00:55 +0000   Mon, 29 Apr 2024 12:00:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:00:55 +0000   Mon, 29 Apr 2024 12:00:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-310000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 09ebb9d5685744fbb8f3238bfaf2a09b
	  System UUID:                09ebb9d5685744fbb8f3238bfaf2a09b
	  Boot ID:                    ad068707-30b2-4533-be10-89eb487461ac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-clw5q                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-w8x7w                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-310000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kube-apiserver-running-upgrade-310000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-310000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-f4dll                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-310000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-310000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-310000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-310000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-310000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-310000 event: Registered Node running-upgrade-310000 in Controller
	
	
	==> dmesg <==
	[  +1.896169] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.067982] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.083161] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.141029] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.079446] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +0.069140] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[  +2.440342] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[ +14.155873] systemd-fstab-generator[1966]: Ignoring "noauto" for root device
	[  +2.694695] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[  +0.143628] systemd-fstab-generator[2279]: Ignoring "noauto" for root device
	[  +0.090450] systemd-fstab-generator[2290]: Ignoring "noauto" for root device
	[  +0.093958] systemd-fstab-generator[2303]: Ignoring "noauto" for root device
	[  +1.339313] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.116292] systemd-fstab-generator[2675]: Ignoring "noauto" for root device
	[  +0.077443] systemd-fstab-generator[2686]: Ignoring "noauto" for root device
	[  +0.074337] systemd-fstab-generator[2697]: Ignoring "noauto" for root device
	[  +0.089989] systemd-fstab-generator[2711]: Ignoring "noauto" for root device
	[  +2.203337] systemd-fstab-generator[2861]: Ignoring "noauto" for root device
	[  +4.428495] systemd-fstab-generator[3235]: Ignoring "noauto" for root device
	[  +1.460719] systemd-fstab-generator[3439]: Ignoring "noauto" for root device
	[ +19.447473] kauditd_printk_skb: 68 callbacks suppressed
	[Apr29 11:57] kauditd_printk_skb: 23 callbacks suppressed
	[Apr29 12:00] systemd-fstab-generator[10080]: Ignoring "noauto" for root device
	[  +5.632626] systemd-fstab-generator[10667]: Ignoring "noauto" for root device
	[  +0.486635] systemd-fstab-generator[10803]: Ignoring "noauto" for root device
	
	
	==> etcd [9d800ecb2445] <==
	{"level":"info","ts":"2024-04-29T12:00:50.559Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-04-29T12:00:50.559Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T12:00:50.559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-04-29T12:00:50.559Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-04-29T12:00:50.559Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-29T12:00:50.559Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-29T12:00:50.559Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T12:00:51.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T12:00:51.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T12:00:51.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-04-29T12:00:51.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T12:00:51.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-29T12:00:51.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T12:00:51.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-29T12:00:51.240Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-310000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T12:00:51.240Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T12:00:51.240Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T12:00:51.240Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T12:00:51.241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T12:00:51.241Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T12:00:51.241Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T12:00:51.241Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T12:00:51.241Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-04-29T12:00:51.243Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T12:00:51.243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:05:16 up 9 min,  0 users,  load average: 0.21, 0.34, 0.19
	Linux running-upgrade-310000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9a188a09281c] <==
	I0429 12:00:52.510965       1 cache.go:39] Caches are synced for autoregister controller
	I0429 12:00:52.511476       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0429 12:00:52.518092       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0429 12:00:52.520083       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 12:00:52.520440       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 12:00:52.521579       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0429 12:00:52.551955       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0429 12:00:53.243430       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0429 12:00:53.424318       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 12:00:53.429075       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 12:00:53.429098       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 12:00:53.564229       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 12:00:53.576725       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 12:00:53.679270       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0429 12:00:53.681394       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0429 12:00:53.681782       1 controller.go:611] quota admission added evaluator for: endpoints
	I0429 12:00:53.683362       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 12:00:54.548734       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0429 12:00:55.017266       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0429 12:00:55.020893       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0429 12:00:55.047880       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0429 12:00:55.073391       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 12:01:08.403792       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0429 12:01:08.453508       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0429 12:01:10.160789       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [c5c260f9a149] <==
	I0429 12:01:08.297503       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0429 12:01:08.298602       1 shared_informer.go:262] Caches are synced for endpoint
	I0429 12:01:08.298620       1 shared_informer.go:262] Caches are synced for job
	I0429 12:01:08.298636       1 shared_informer.go:262] Caches are synced for ephemeral
	I0429 12:01:08.298641       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0429 12:01:08.299439       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0429 12:01:08.300251       1 shared_informer.go:262] Caches are synced for attach detach
	I0429 12:01:08.302005       1 shared_informer.go:262] Caches are synced for PVC protection
	I0429 12:01:08.304103       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0429 12:01:08.304388       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0429 12:01:08.374288       1 shared_informer.go:262] Caches are synced for stateful set
	I0429 12:01:08.398784       1 shared_informer.go:262] Caches are synced for daemon sets
	I0429 12:01:08.398785       1 shared_informer.go:262] Caches are synced for persistent volume
	I0429 12:01:08.407742       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f4dll"
	I0429 12:01:08.449565       1 shared_informer.go:262] Caches are synced for deployment
	I0429 12:01:08.454880       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0429 12:01:08.463021       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-w8x7w"
	I0429 12:01:08.465446       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-clw5q"
	I0429 12:01:08.468988       1 shared_informer.go:262] Caches are synced for resource quota
	I0429 12:01:08.498722       1 shared_informer.go:262] Caches are synced for disruption
	I0429 12:01:08.498798       1 disruption.go:371] Sending events to api server.
	I0429 12:01:08.506122       1 shared_informer.go:262] Caches are synced for resource quota
	I0429 12:01:08.925894       1 shared_informer.go:262] Caches are synced for garbage collector
	I0429 12:01:08.949269       1 shared_informer.go:262] Caches are synced for garbage collector
	I0429 12:01:08.949345       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [c8d1a7e984cb] <==
	I0429 12:01:10.146367       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0429 12:01:10.146392       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0429 12:01:10.146402       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0429 12:01:10.159193       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0429 12:01:10.159207       1 server_others.go:206] "Using iptables Proxier"
	I0429 12:01:10.159220       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0429 12:01:10.159317       1 server.go:661] "Version info" version="v1.24.1"
	I0429 12:01:10.159322       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:01:10.159547       1 config.go:317] "Starting service config controller"
	I0429 12:01:10.159551       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0429 12:01:10.159559       1 config.go:226] "Starting endpoint slice config controller"
	I0429 12:01:10.159561       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0429 12:01:10.159834       1 config.go:444] "Starting node config controller"
	I0429 12:01:10.159836       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0429 12:01:10.260404       1 shared_informer.go:262] Caches are synced for node config
	I0429 12:01:10.260422       1 shared_informer.go:262] Caches are synced for service config
	I0429 12:01:10.260424       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f831da972c69] <==
	W0429 12:00:52.461228       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:00:52.464134       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 12:00:52.461330       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 12:00:52.461341       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 12:00:52.461354       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 12:00:52.461366       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 12:00:52.461412       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 12:00:52.464238       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:00:52.464241       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 12:00:52.464243       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 12:00:52.464386       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 12:00:52.464390       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:00:53.302821       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 12:00:53.302877       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 12:00:53.305965       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 12:00:53.306151       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 12:00:53.326007       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 12:00:53.326042       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 12:00:53.342240       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 12:00:53.342432       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 12:00:53.360626       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 12:00:53.360722       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:00:53.442657       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:00:53.442719       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0429 12:00:55.757904       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-04-29 11:55:56 UTC, ends at Mon 2024-04-29 12:05:16 UTC. --
	Apr 29 12:00:56 running-upgrade-310000 kubelet[10673]: I0429 12:00:56.478521   10673 reconciler.go:157] "Reconciler: start to sync state"
	Apr 29 12:00:56 running-upgrade-310000 kubelet[10673]: E0429 12:00:56.654525   10673 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-310000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-310000"
	Apr 29 12:00:56 running-upgrade-310000 kubelet[10673]: E0429 12:00:56.856292   10673 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-310000\" already exists" pod="kube-system/etcd-running-upgrade-310000"
	Apr 29 12:00:57 running-upgrade-310000 kubelet[10673]: E0429 12:00:57.057483   10673 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-310000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-310000"
	Apr 29 12:00:57 running-upgrade-310000 kubelet[10673]: I0429 12:00:57.253243   10673 request.go:601] Waited for 1.129814659s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Apr 29 12:00:57 running-upgrade-310000 kubelet[10673]: E0429 12:00:57.255765   10673 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-310000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-310000"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.285256   10673 topology_manager.go:200] "Topology Admit Handler"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.372340   10673 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.372738   10673 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.410005   10673 topology_manager.go:200] "Topology Admit Handler"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.465538   10673 topology_manager.go:200] "Topology Admit Handler"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.472258   10673 topology_manager.go:200] "Topology Admit Handler"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.474432   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/18f71a7f-01b0-44bd-b925-62d47dfb440c-tmp\") pod \"storage-provisioner\" (UID: \"18f71a7f-01b0-44bd-b925-62d47dfb440c\") " pod="kube-system/storage-provisioner"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.474622   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stpk4\" (UniqueName: \"kubernetes.io/projected/18f71a7f-01b0-44bd-b925-62d47dfb440c-kube-api-access-stpk4\") pod \"storage-provisioner\" (UID: \"18f71a7f-01b0-44bd-b925-62d47dfb440c\") " pod="kube-system/storage-provisioner"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.574882   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55e294d3-185c-48a9-86c1-1eed5ba2d6c4-kube-proxy\") pod \"kube-proxy-f4dll\" (UID: \"55e294d3-185c-48a9-86c1-1eed5ba2d6c4\") " pod="kube-system/kube-proxy-f4dll"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.574907   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55e294d3-185c-48a9-86c1-1eed5ba2d6c4-lib-modules\") pod \"kube-proxy-f4dll\" (UID: \"55e294d3-185c-48a9-86c1-1eed5ba2d6c4\") " pod="kube-system/kube-proxy-f4dll"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.574917   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04ec53c3-993a-461a-984f-451360de2aff-config-volume\") pod \"coredns-6d4b75cb6d-w8x7w\" (UID: \"04ec53c3-993a-461a-984f-451360de2aff\") " pod="kube-system/coredns-6d4b75cb6d-w8x7w"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.574931   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vtxt\" (UniqueName: \"kubernetes.io/projected/92cb5749-368b-4c4a-9f2e-8f4d085f2a00-kube-api-access-6vtxt\") pod \"coredns-6d4b75cb6d-clw5q\" (UID: \"92cb5749-368b-4c4a-9f2e-8f4d085f2a00\") " pod="kube-system/coredns-6d4b75cb6d-clw5q"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.574942   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmms\" (UniqueName: \"kubernetes.io/projected/55e294d3-185c-48a9-86c1-1eed5ba2d6c4-kube-api-access-gmmms\") pod \"kube-proxy-f4dll\" (UID: \"55e294d3-185c-48a9-86c1-1eed5ba2d6c4\") " pod="kube-system/kube-proxy-f4dll"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.574952   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6zbp\" (UniqueName: \"kubernetes.io/projected/04ec53c3-993a-461a-984f-451360de2aff-kube-api-access-n6zbp\") pod \"coredns-6d4b75cb6d-w8x7w\" (UID: \"04ec53c3-993a-461a-984f-451360de2aff\") " pod="kube-system/coredns-6d4b75cb6d-w8x7w"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.574962   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55e294d3-185c-48a9-86c1-1eed5ba2d6c4-xtables-lock\") pod \"kube-proxy-f4dll\" (UID: \"55e294d3-185c-48a9-86c1-1eed5ba2d6c4\") " pod="kube-system/kube-proxy-f4dll"
	Apr 29 12:01:08 running-upgrade-310000 kubelet[10673]: I0429 12:01:08.574977   10673 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92cb5749-368b-4c4a-9f2e-8f4d085f2a00-config-volume\") pod \"coredns-6d4b75cb6d-clw5q\" (UID: \"92cb5749-368b-4c4a-9f2e-8f4d085f2a00\") " pod="kube-system/coredns-6d4b75cb6d-clw5q"
	Apr 29 12:01:09 running-upgrade-310000 kubelet[10673]: I0429 12:01:09.804102   10673 request.go:601] Waited for 1.128415919s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
	Apr 29 12:04:56 running-upgrade-310000 kubelet[10673]: I0429 12:04:56.726203   10673 scope.go:110] "RemoveContainer" containerID="8b104f5475d1a7ff7affd7ba9b8e6969563c167c9df2fe21d58559d375051269"
	Apr 29 12:04:56 running-upgrade-310000 kubelet[10673]: I0429 12:04:56.742288   10673 scope.go:110] "RemoveContainer" containerID="cf01071a108ef88bcf7bb42e8644776141a47040365a45fc156e88a4168467f7"
	
	
	==> storage-provisioner [497dc39d1f27] <==
	I0429 12:01:09.373913       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 12:01:09.378083       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 12:01:09.378118       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 12:01:09.381586       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 12:01:09.381705       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-310000_24025d5d-2498-41a7-a934-126c44c8d18b!
	I0429 12:01:09.382371       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"141aaa86-403e-462f-a89f-41151219f665", APIVersion:"v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-310000_24025d5d-2498-41a7-a934-126c44c8d18b became leader
	I0429 12:01:09.482361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-310000_24025d5d-2498-41a7-a934-126c44c8d18b!

-- /stdout --
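Every CoreDNS instance in the dump above fails the same way: its HINFO readiness probes to the upstream resolver 10.0.2.3:53 (QEMU's user-mode DNS forwarder) hit "i/o timeout", so the pods never reach an external resolver. A minimal, hypothetical Go sketch (not part of the test suite; the looked-up name "registry.k8s.io" is an arbitrary choice) that reproduces the same query path from inside the guest:

	// dnsprobe.go — hedged repro sketch; 10.0.2.3:53 is the upstream
	// taken from the CoreDNS error lines above.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Route every lookup through the same upstream CoreDNS forwards to.
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "registry.k8s.io")
		// An "i/o timeout" here mirrors the plugin/errors entries above.
		fmt.Println(addrs, err)
	}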
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-310000 -n running-upgrade-310000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-310000 -n running-upgrade-310000: exit status 2 (15.589143709s)

-- stdout --
	Stopped

-- /stdout --
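The `--format={{.APIServer}}` flag above is a Go text/template executed against minikube's status value, which is why the command prints just `Stopped`. A self-contained sketch of the mechanism (the Status type and its fields here are assumptions for illustration, not minikube's exact definition):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mimics the shape a status template is rendered against;
	// field names are assumed for illustration only.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Stopped", matching the stdout captured above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
	}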
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-310000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-310000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-310000
--- FAIL: TestRunningBinaryUpgrade (604.89s)

TestKubernetesUpgrade (18.14s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-894000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-894000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.784544125s)

-- stdout --
	* [kubernetes-upgrade-894000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-894000" primary control-plane node in "kubernetes-upgrade-894000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-894000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
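Both VM creation attempts fail on the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon is not accepting connections on the host. A hedged diagnostic sketch (not from the test suite) that checks the socket directly before blaming QEMU:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path copied from the error output above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here means the daemon is down or the
			// socket file is stale; restarting socket_vmnet is the usual fix.
			fmt.Printf("socket_vmnet unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}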
** stderr ** 
	I0429 04:58:27.564219    8356 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:58:27.564357    8356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:58:27.564362    8356 out.go:304] Setting ErrFile to fd 2...
	I0429 04:58:27.564364    8356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:58:27.564493    8356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:58:27.565663    8356 out.go:298] Setting JSON to false
	I0429 04:58:27.582070    8356 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5278,"bootTime":1714386629,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:58:27.582136    8356 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:58:27.585853    8356 out.go:177] * [kubernetes-upgrade-894000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:58:27.592832    8356 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:58:27.592919    8356 notify.go:220] Checking for updates...
	I0429 04:58:27.596737    8356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:58:27.599797    8356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:58:27.602978    8356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:58:27.605767    8356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:58:27.608748    8356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:58:27.612179    8356 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:58:27.612254    8356 config.go:182] Loaded profile config "running-upgrade-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 04:58:27.612303    8356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:58:27.615810    8356 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 04:58:27.622810    8356 start.go:297] selected driver: qemu2
	I0429 04:58:27.622816    8356 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:58:27.622822    8356 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:58:27.625078    8356 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:58:27.626327    8356 out.go:177] * Automatically selected the socket_vmnet network
	I0429 04:58:27.628820    8356 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 04:58:27.628844    8356 cni.go:84] Creating CNI manager for ""
	I0429 04:58:27.628853    8356 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 04:58:27.628881    8356 start.go:340] cluster config:
	{Name:kubernetes-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:58:27.633165    8356 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:58:27.642769    8356 out.go:177] * Starting "kubernetes-upgrade-894000" primary control-plane node in "kubernetes-upgrade-894000" cluster
	I0429 04:58:27.646787    8356 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:58:27.646807    8356 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0429 04:58:27.646816    8356 cache.go:56] Caching tarball of preloaded images
	I0429 04:58:27.646904    8356 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:58:27.646909    8356 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 04:58:27.646961    8356 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/kubernetes-upgrade-894000/config.json ...
	I0429 04:58:27.646975    8356 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/kubernetes-upgrade-894000/config.json: {Name:mk6290e5d564a674f13df45f0287625baf56c6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:58:27.647326    8356 start.go:360] acquireMachinesLock for kubernetes-upgrade-894000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:58:27.647357    8356 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "kubernetes-upgrade-894000"
	I0429 04:58:27.647367    8356 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:58:27.647399    8356 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:58:27.651658    8356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 04:58:27.677619    8356 start.go:159] libmachine.API.Create for "kubernetes-upgrade-894000" (driver="qemu2")
	I0429 04:58:27.677646    8356 client.go:168] LocalClient.Create starting
	I0429 04:58:27.677716    8356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:58:27.677749    8356 main.go:141] libmachine: Decoding PEM data...
	I0429 04:58:27.677762    8356 main.go:141] libmachine: Parsing certificate...
	I0429 04:58:27.677798    8356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:58:27.677820    8356 main.go:141] libmachine: Decoding PEM data...
	I0429 04:58:27.677827    8356 main.go:141] libmachine: Parsing certificate...
	I0429 04:58:27.678184    8356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:58:27.826359    8356 main.go:141] libmachine: Creating SSH key...
	I0429 04:58:27.928209    8356 main.go:141] libmachine: Creating Disk image...
	I0429 04:58:27.928220    8356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:58:27.928416    8356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2
	I0429 04:58:27.941803    8356 main.go:141] libmachine: STDOUT: 
	I0429 04:58:27.941830    8356 main.go:141] libmachine: STDERR: 
	I0429 04:58:27.941898    8356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2 +20000M
	I0429 04:58:27.953013    8356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:58:27.953037    8356 main.go:141] libmachine: STDERR: 
	I0429 04:58:27.953057    8356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2
	I0429 04:58:27.953062    8356 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:58:27.953103    8356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:7f:ed:15:4a:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2
	I0429 04:58:27.954771    8356 main.go:141] libmachine: STDOUT: 
	I0429 04:58:27.954787    8356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:58:27.954816    8356 client.go:171] duration metric: took 277.156375ms to LocalClient.Create
	I0429 04:58:29.957049    8356 start.go:128] duration metric: took 2.309618125s to createHost
	I0429 04:58:29.957122    8356 start.go:83] releasing machines lock for "kubernetes-upgrade-894000", held for 2.30975575s
	W0429 04:58:29.957207    8356 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:58:29.968382    8356 out.go:177] * Deleting "kubernetes-upgrade-894000" in qemu2 ...
	W0429 04:58:29.991945    8356 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:58:29.991987    8356 start.go:728] Will try again in 5 seconds ...
	I0429 04:58:34.992729    8356 start.go:360] acquireMachinesLock for kubernetes-upgrade-894000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:58:34.993318    8356 start.go:364] duration metric: took 497.208µs to acquireMachinesLock for "kubernetes-upgrade-894000"
	I0429 04:58:34.993463    8356 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:58:34.993696    8356 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 04:58:35.002316    8356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 04:58:35.050877    8356 start.go:159] libmachine.API.Create for "kubernetes-upgrade-894000" (driver="qemu2")
	I0429 04:58:35.050933    8356 client.go:168] LocalClient.Create starting
	I0429 04:58:35.051049    8356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 04:58:35.051120    8356 main.go:141] libmachine: Decoding PEM data...
	I0429 04:58:35.051141    8356 main.go:141] libmachine: Parsing certificate...
	I0429 04:58:35.051213    8356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 04:58:35.051256    8356 main.go:141] libmachine: Decoding PEM data...
	I0429 04:58:35.051268    8356 main.go:141] libmachine: Parsing certificate...
	I0429 04:58:35.052147    8356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 04:58:35.200813    8356 main.go:141] libmachine: Creating SSH key...
	I0429 04:58:35.244896    8356 main.go:141] libmachine: Creating Disk image...
	I0429 04:58:35.244901    8356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 04:58:35.245087    8356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2
	I0429 04:58:35.257767    8356 main.go:141] libmachine: STDOUT: 
	I0429 04:58:35.257788    8356 main.go:141] libmachine: STDERR: 
	I0429 04:58:35.257839    8356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2 +20000M
	I0429 04:58:35.268546    8356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 04:58:35.268563    8356 main.go:141] libmachine: STDERR: 
	I0429 04:58:35.268574    8356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2
	I0429 04:58:35.268578    8356 main.go:141] libmachine: Starting QEMU VM...
	I0429 04:58:35.268618    8356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:71:39:27:cb:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2
	I0429 04:58:35.270348    8356 main.go:141] libmachine: STDOUT: 
	I0429 04:58:35.270370    8356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:58:35.270385    8356 client.go:171] duration metric: took 219.448209ms to LocalClient.Create
	I0429 04:58:37.272594    8356 start.go:128] duration metric: took 2.278843167s to createHost
	I0429 04:58:37.272670    8356 start.go:83] releasing machines lock for "kubernetes-upgrade-894000", held for 2.279330375s
	W0429 04:58:37.273118    8356 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-894000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-894000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:58:37.288801    8356 out.go:177] 
	W0429 04:58:37.292870    8356 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:58:37.293047    8356 out.go:239] * 
	* 
	W0429 04:58:37.295765    8356 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:58:37.303740    8356 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-894000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
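Both provisioning attempts above fail at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet ("Connection refused"), so the VM never boots. A minimal triage sketch for the build agent, assuming socket_vmnet was installed via Homebrew as described in minikube's qemu driver docs (the service command below comes from those docs, not from this log):

	# is anything serving the socket the log points at?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# if the daemon is down, start it as a root service (per minikube's docs)
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet

Until the daemon is reachable, every qemu2 test that uses the socket_vmnet network will fail the same way, which accounts for most of the failures in this report.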
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-894000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-894000: (2.937765375s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-894000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-894000 status --format={{.Host}}: exit status 7 (52.301666ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-894000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-894000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.183145833s)

-- stdout --
	* [kubernetes-upgrade-894000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-894000" primary control-plane node in "kubernetes-upgrade-894000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 04:58:40.342780    8394 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:58:40.342913    8394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:58:40.342921    8394 out.go:304] Setting ErrFile to fd 2...
	I0429 04:58:40.342923    8394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:58:40.343050    8394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:58:40.344117    8394 out.go:298] Setting JSON to false
	I0429 04:58:40.361195    8394 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5291,"bootTime":1714386629,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:58:40.361261    8394 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:58:40.366480    8394 out.go:177] * [kubernetes-upgrade-894000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:58:40.374660    8394 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:58:40.374703    8394 notify.go:220] Checking for updates...
	I0429 04:58:40.378549    8394 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:58:40.381598    8394 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:58:40.384616    8394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:58:40.387497    8394 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:58:40.390558    8394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:58:40.393842    8394 config.go:182] Loaded profile config "kubernetes-upgrade-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0429 04:58:40.394096    8394 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:58:40.398544    8394 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:58:40.405595    8394 start.go:297] selected driver: qemu2
	I0429 04:58:40.405606    8394 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:58:40.405679    8394 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:58:40.407930    8394 cni.go:84] Creating CNI manager for ""
	I0429 04:58:40.407949    8394 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:58:40.407978    8394 start.go:340] cluster config:
	{Name:kubernetes-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:58:40.411976    8394 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:58:40.419579    8394 out.go:177] * Starting "kubernetes-upgrade-894000" primary control-plane node in "kubernetes-upgrade-894000" cluster
	I0429 04:58:40.423550    8394 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:58:40.423563    8394 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:58:40.423568    8394 cache.go:56] Caching tarball of preloaded images
	I0429 04:58:40.423616    8394 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:58:40.423621    8394 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:58:40.423668    8394 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/kubernetes-upgrade-894000/config.json ...
	I0429 04:58:40.424149    8394 start.go:360] acquireMachinesLock for kubernetes-upgrade-894000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:58:40.424180    8394 start.go:364] duration metric: took 25.208µs to acquireMachinesLock for "kubernetes-upgrade-894000"
	I0429 04:58:40.424188    8394 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:58:40.424194    8394 fix.go:54] fixHost starting: 
	I0429 04:58:40.424303    8394 fix.go:112] recreateIfNeeded on kubernetes-upgrade-894000: state=Stopped err=<nil>
	W0429 04:58:40.424310    8394 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:58:40.432568    8394 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-894000" ...
	I0429 04:58:40.436585    8394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:71:39:27:cb:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2
	I0429 04:58:40.438544    8394 main.go:141] libmachine: STDOUT: 
	I0429 04:58:40.438560    8394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:58:40.438586    8394 fix.go:56] duration metric: took 14.391583ms for fixHost
	I0429 04:58:40.438591    8394 start.go:83] releasing machines lock for "kubernetes-upgrade-894000", held for 14.407667ms
	W0429 04:58:40.438596    8394 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:58:40.438622    8394 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:58:40.438626    8394 start.go:728] Will try again in 5 seconds ...
	I0429 04:58:45.440814    8394 start.go:360] acquireMachinesLock for kubernetes-upgrade-894000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:58:45.441259    8394 start.go:364] duration metric: took 364.042µs to acquireMachinesLock for "kubernetes-upgrade-894000"
	I0429 04:58:45.441328    8394 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:58:45.441343    8394 fix.go:54] fixHost starting: 
	I0429 04:58:45.441888    8394 fix.go:112] recreateIfNeeded on kubernetes-upgrade-894000: state=Stopped err=<nil>
	W0429 04:58:45.441905    8394 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:58:45.445375    8394 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-894000" ...
	I0429 04:58:45.456718    8394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:71:39:27:cb:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubernetes-upgrade-894000/disk.qcow2
	I0429 04:58:45.461667    8394 main.go:141] libmachine: STDOUT: 
	I0429 04:58:45.461716    8394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 04:58:45.461763    8394 fix.go:56] duration metric: took 20.423625ms for fixHost
	I0429 04:58:45.461774    8394 start.go:83] releasing machines lock for "kubernetes-upgrade-894000", held for 20.499125ms
	W0429 04:58:45.461890    8394 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 04:58:45.469316    8394 out.go:177] 
	W0429 04:58:45.472283    8394 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 04:58:45.472295    8394 out.go:239] * 
	* 
	W0429 04:58:45.473413    8394 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:58:45.485390    8394 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-894000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-894000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-894000 version --output=json: exit status 1 (45.427584ms)

** stderr ** 
	error: context "kubernetes-upgrade-894000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-04-29 04:58:45.541464 -0700 PDT m=+898.147579960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-894000 -n kubernetes-upgrade-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-894000 -n kubernetes-upgrade-894000: exit status 7 (32.91575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-894000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-894000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-894000
--- FAIL: TestKubernetesUpgrade (18.14s)
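The closing kubectl failure ("context \"kubernetes-upgrade-894000\" does not exist") is a downstream symptom rather than a separate bug: because the VM never started, minikube never wrote the profile's context into the kubeconfig, so the version check had nothing to query. One way to confirm this on the agent, using plain kubectl against the KUBECONFIG path shown in the log:

	KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig kubectl config get-contexts

If kubernetes-upgrade-894000 is missing from that list, the upgrade assertions could never have run.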

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin (arm64)
- MINIKUBE_LOCATION=18771
- KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3108466331/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.11s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.9s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin (arm64)
- MINIKUBE_LOCATION=18771
- KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current56172148/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.90s)
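Both TestHyperkitDriverSkipUpgrade subtests fail for the same environmental reason: hyperkit is an Intel-only hypervisor, so minikube on this darwin/arm64 agent correctly refuses the driver with DRV_UNSUPPORTED_OS (exit status 56), while the test expects the driver-upgrade path to succeed. The fix belongs in the harness rather than in minikube, for example a skip guard on Apple Silicon (a sketch of the missing guard, not code present in the repo; the Go tests would check runtime.GOARCH the same way):

	# hypothetical pre-check: hyperkit cannot run on Apple Silicon
	if [ "$(uname -m)" = "arm64" ]; then
		echo "SKIP: the hyperkit driver is not supported on darwin/arm64"
		exit 0
	fi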

TestStoppedBinaryUpgrade/Upgrade (580.42s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.409167081 start -p stopped-upgrade-383000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.409167081 start -p stopped-upgrade-383000 --memory=2200 --vm-driver=qemu2 : (45.323228667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.409167081 -p stopped-upgrade-383000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.409167081 -p stopped-upgrade-383000 stop: (12.109877083s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-383000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-383000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.894671458s)

-- stdout --
	* [stopped-upgrade-383000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-383000" primary control-plane node in "stopped-upgrade-383000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-383000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0429 04:59:44.210421    8430 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:59:44.210591    8430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:44.210595    8430 out.go:304] Setting ErrFile to fd 2...
	I0429 04:59:44.210597    8430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:59:44.210755    8430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:59:44.211924    8430 out.go:298] Setting JSON to false
	I0429 04:59:44.230312    8430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5355,"bootTime":1714386629,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:59:44.230388    8430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:59:44.234644    8430 out.go:177] * [stopped-upgrade-383000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:59:44.242551    8430 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:59:44.242627    8430 notify.go:220] Checking for updates...
	I0429 04:59:44.249486    8430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:59:44.257483    8430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:59:44.260526    8430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:59:44.264489    8430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:59:44.267527    8430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:59:44.270745    8430 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 04:59:44.273475    8430 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0429 04:59:44.276473    8430 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:59:44.280531    8430 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:59:44.287456    8430 start.go:297] selected driver: qemu2
	I0429 04:59:44.287464    8430 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51384 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 04:59:44.287516    8430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:59:44.290219    8430 cni.go:84] Creating CNI manager for ""
	I0429 04:59:44.290239    8430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:59:44.290270    8430 start.go:340] cluster config:
	{Name:stopped-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51384 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 04:59:44.290332    8430 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:59:44.298350    8430 out.go:177] * Starting "stopped-upgrade-383000" primary control-plane node in "stopped-upgrade-383000" cluster
	I0429 04:59:44.302503    8430 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0429 04:59:44.302524    8430 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0429 04:59:44.302535    8430 cache.go:56] Caching tarball of preloaded images
	I0429 04:59:44.302603    8430 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 04:59:44.302609    8430 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0429 04:59:44.302660    8430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/config.json ...
	I0429 04:59:44.303166    8430 start.go:360] acquireMachinesLock for stopped-upgrade-383000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 04:59:44.303202    8430 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "stopped-upgrade-383000"
	I0429 04:59:44.303212    8430 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:59:44.303217    8430 fix.go:54] fixHost starting: 
	I0429 04:59:44.303328    8430 fix.go:112] recreateIfNeeded on stopped-upgrade-383000: state=Stopped err=<nil>
	W0429 04:59:44.303343    8430 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 04:59:44.311490    8430 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-383000" ...
	I0429 04:59:44.315540    8430 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51351-:22,hostfwd=tcp::51352-:2376,hostname=stopped-upgrade-383000 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/disk.qcow2
	I0429 04:59:44.363395    8430 main.go:141] libmachine: STDOUT: 
	I0429 04:59:44.363447    8430 main.go:141] libmachine: STDERR: 
	I0429 04:59:44.363452    8430 main.go:141] libmachine: Waiting for VM to start (ssh -p 51351 docker@127.0.0.1)...
	I0429 05:00:04.218412    8430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/config.json ...
	I0429 05:00:04.219456    8430 machine.go:94] provisionDockerMachine start ...
	I0429 05:00:04.219581    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.219928    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.219941    8430 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 05:00:04.296106    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 05:00:04.296122    8430 buildroot.go:166] provisioning hostname "stopped-upgrade-383000"
	I0429 05:00:04.296190    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.296332    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.296339    8430 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-383000 && echo "stopped-upgrade-383000" | sudo tee /etc/hostname
	I0429 05:00:04.365595    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-383000
	
	I0429 05:00:04.365664    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.365804    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.365813    8430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-383000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-383000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-383000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 05:00:04.431806    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 05:00:04.431822    8430 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18771-6092/.minikube CaCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18771-6092/.minikube}
	I0429 05:00:04.431830    8430 buildroot.go:174] setting up certificates
	I0429 05:00:04.431835    8430 provision.go:84] configureAuth start
	I0429 05:00:04.431840    8430 provision.go:143] copyHostCerts
	I0429 05:00:04.431925    8430 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem, removing ...
	I0429 05:00:04.431933    8430 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem
	I0429 05:00:04.432047    8430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.pem (1082 bytes)
	I0429 05:00:04.432246    8430 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem, removing ...
	I0429 05:00:04.432251    8430 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem
	I0429 05:00:04.432312    8430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/cert.pem (1123 bytes)
	I0429 05:00:04.432442    8430 exec_runner.go:144] found /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem, removing ...
	I0429 05:00:04.432446    8430 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem
	I0429 05:00:04.432501    8430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18771-6092/.minikube/key.pem (1679 bytes)
	I0429 05:00:04.432615    8430 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-383000 san=[127.0.0.1 localhost minikube stopped-upgrade-383000]
	I0429 05:00:04.542191    8430 provision.go:177] copyRemoteCerts
	I0429 05:00:04.542232    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 05:00:04.542240    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:00:04.574403    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 05:00:04.580951    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 05:00:04.587636    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 05:00:04.594984    8430 provision.go:87] duration metric: took 163.139417ms to configureAuth
	I0429 05:00:04.594993    8430 buildroot.go:189] setting minikube options for container-runtime
	I0429 05:00:04.595097    8430 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:00:04.595136    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.595223    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.595228    8430 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 05:00:04.657067    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 05:00:04.657076    8430 buildroot.go:70] root file system type: tmpfs
	I0429 05:00:04.657131    8430 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 05:00:04.657177    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.657288    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.657328    8430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 05:00:04.720326    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
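	The drop-in comment embedded above relies on a systemd rule worth spelling out: for any service type other than oneshot, ExecStart= is a list that accumulates across unit files, so an override must first assign an empty ExecStart= to clear the inherited command before supplying its own. A minimal sketch of the pattern, with a hypothetical override path (this run writes a full replacement unit instead of a drop-in):

	  # /etc/systemd/system/docker.service.d/override.conf (illustrative path)
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

	A daemon-reload is still required before the new command takes effect, which is why the command below reloads systemd after swapping the unit file in.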
	I0429 05:00:04.720372    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:04.720477    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:04.720485    8430 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 05:00:05.057917    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
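	The diff invocation above is a write-if-changed guard: diff -u exits non-zero when the two files differ or, as the "can't stat" output shows here, when the old unit does not exist yet, and only then does the || block move the new unit into place and reload, enable, and restart docker. The same idiom in isolation, with illustrative file and service names:

	  diff -u /etc/myapp.conf /etc/myapp.conf.new \
	    || { sudo mv /etc/myapp.conf.new /etc/myapp.conf && sudo systemctl restart myapp; }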
	I0429 05:00:05.057932    8430 machine.go:97] duration metric: took 838.465709ms to provisionDockerMachine
	I0429 05:00:05.057939    8430 start.go:293] postStartSetup for "stopped-upgrade-383000" (driver="qemu2")
	I0429 05:00:05.057945    8430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 05:00:05.058014    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 05:00:05.058023    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:00:05.090389    8430 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 05:00:05.091622    8430 info.go:137] Remote host: Buildroot 2021.02.12
	I0429 05:00:05.091630    8430 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18771-6092/.minikube/addons for local assets ...
	I0429 05:00:05.091707    8430 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18771-6092/.minikube/files for local assets ...
	I0429 05:00:05.091828    8430 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem -> 65002.pem in /etc/ssl/certs
	I0429 05:00:05.091961    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 05:00:05.094758    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem --> /etc/ssl/certs/65002.pem (1708 bytes)
	I0429 05:00:05.101609    8430 start.go:296] duration metric: took 43.665125ms for postStartSetup
	I0429 05:00:05.101622    8430 fix.go:56] duration metric: took 20.798449s for fixHost
	I0429 05:00:05.101658    8430 main.go:141] libmachine: Using SSH client type: native
	I0429 05:00:05.102083    8430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004bdc80] 0x1004c04e0 <nil>  [] 0s} localhost 51351 <nil> <nil>}
	I0429 05:00:05.102104    8430 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 05:00:05.163755    8430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714392004.751566671
	
	I0429 05:00:05.163765    8430 fix.go:216] guest clock: 1714392004.751566671
	I0429 05:00:05.163769    8430 fix.go:229] Guest: 2024-04-29 05:00:04.751566671 -0700 PDT Remote: 2024-04-29 05:00:05.101624 -0700 PDT m=+20.926633001 (delta=-350.057329ms)
	I0429 05:00:05.163781    8430 fix.go:200] guest clock delta is within tolerance: -350.057329ms
	I0429 05:00:05.163785    8430 start.go:83] releasing machines lock for "stopped-upgrade-383000", held for 20.860621666s
	I0429 05:00:05.163850    8430 ssh_runner.go:195] Run: cat /version.json
	I0429 05:00:05.163860    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:00:05.163930    8430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 05:00:05.163986    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	W0429 05:00:05.164485    8430 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51470->127.0.0.1:51351: write: broken pipe
	I0429 05:00:05.164508    8430 retry.go:31] will retry after 200.187864ms: ssh: handshake failed: write tcp 127.0.0.1:51470->127.0.0.1:51351: write: broken pipe
	W0429 05:00:05.402188    8430 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0429 05:00:05.402278    8430 ssh_runner.go:195] Run: systemctl --version
	I0429 05:00:05.405112    8430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 05:00:05.407011    8430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 05:00:05.407042    8430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0429 05:00:05.410081    8430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0429 05:00:05.414960    8430 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
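	The two find/sed pipelines above normalize every bridge and podman CNI config under /etc/cni/net.d to minikube's pod CIDR; the log line above confirms 87-podman-bridge.conflist was rewritten. The net effect on a conflist entry is a single-field rewrite, roughly like this (the original value shown is podman's customary default and is illustrative):

	  "subnet": "10.88.0.0/16"  ->  "subnet": "10.244.0.0/16"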
	I0429 05:00:05.414969    8430 start.go:494] detecting cgroup driver to use...
	I0429 05:00:05.415046    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 05:00:05.424369    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0429 05:00:05.429531    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 05:00:05.433716    8430 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 05:00:05.433771    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 05:00:05.440294    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 05:00:05.443643    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 05:00:05.447055    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 05:00:05.450411    8430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 05:00:05.453318    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 05:00:05.456102    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 05:00:05.459561    8430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 05:00:05.463081    8430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 05:00:05.465852    8430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 05:00:05.468425    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:05.529499    8430 ssh_runner.go:195] Run: sudo systemctl restart containerd
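	Taken together, the sed edits above should leave the CRI stanza of /etc/containerd/config.toml looking roughly like the following; this is a reconstruction from the substitutions, not a dump from the VM:

	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    sandbox_image = "registry.k8s.io/pause:3.7"
	    restrict_oom_score_adj = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false

	The restart above makes containerd pick the file up, after which minikube stops containerd again in favor of docker plus cri-dockerd, as the next commands show.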
	I0429 05:00:05.540024    8430 start.go:494] detecting cgroup driver to use...
	I0429 05:00:05.540117    8430 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 05:00:05.545427    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 05:00:05.550475    8430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 05:00:05.558453    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 05:00:05.562854    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 05:00:05.567396    8430 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 05:00:05.613893    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 05:00:05.618471    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 05:00:05.624092    8430 ssh_runner.go:195] Run: which cri-dockerd
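	crictl reads its endpoint from /etc/crictl.yaml, so after the rewrite above every crictl call on the node, including the version probe further down, talks to cri-dockerd instead of containerd. The file as written is a single line:

	  runtime-endpoint: unix:///var/run/cri-dockerd.sock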
	I0429 05:00:05.625561    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 05:00:05.628173    8430 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 05:00:05.633340    8430 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 05:00:05.693607    8430 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 05:00:05.769470    8430 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 05:00:05.769532    8430 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 05:00:05.774527    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:05.833369    8430 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 05:00:06.981190    8430 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.14780725s)
	I0429 05:00:06.981257    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 05:00:06.985808    8430 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 05:00:06.991544    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 05:00:06.995859    8430 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 05:00:07.073617    8430 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 05:00:07.133482    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:07.195265    8430 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 05:00:07.201537    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 05:00:07.206020    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:07.264643    8430 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 05:00:07.303664    8430 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 05:00:07.303748    8430 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 05:00:07.306171    8430 start.go:562] Will wait 60s for crictl version
	I0429 05:00:07.306242    8430 ssh_runner.go:195] Run: which crictl
	I0429 05:00:07.307972    8430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 05:00:07.323718    8430 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0429 05:00:07.323788    8430 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 05:00:07.340696    8430 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 05:00:07.361441    8430 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0429 05:00:07.361561    8430 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0429 05:00:07.362735    8430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
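	The brace group above is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal line, echo appends the fresh mapping, and the result is written to a temp file and copied back so /etc/hosts is never truncated while being read. The same pattern, with an illustrative host and address:

	  { grep -v $'\tmyhost.internal$' /etc/hosts; echo $'10.0.0.5\tmyhost.internal'; } > /tmp/h.$$ \
	    && sudo cp /tmp/h.$$ /etc/hosts

	It reappears below for control-plane.minikube.internal.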
	I0429 05:00:07.366457    8430 kubeadm.go:877] updating cluster {Name:stopped-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51384 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0429 05:00:07.366503    8430 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0429 05:00:07.366546    8430 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 05:00:07.376997    8430 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 05:00:07.377012    8430 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0429 05:00:07.377062    8430 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 05:00:07.380032    8430 ssh_runner.go:195] Run: which lz4
	I0429 05:00:07.381368    8430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 05:00:07.382609    8430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 05:00:07.382618    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0429 05:00:08.083093    8430 docker.go:649] duration metric: took 701.758667ms to copy over tarball
	I0429 05:00:08.083173    8430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 05:00:09.274978    8430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.191792291s)
	I0429 05:00:09.274993    8430 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 05:00:09.290720    8430 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 05:00:09.294017    8430 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0429 05:00:09.298944    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:09.364728    8430 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 05:00:11.022646    8430 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.657901167s)
	I0429 05:00:11.022738    8430 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 05:00:11.035239    8430 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 05:00:11.035251    8430 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0429 05:00:11.035256    8430 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 05:00:11.041773    8430 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0429 05:00:11.041806    8430 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 05:00:11.041888    8430 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.041990    8430 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:11.042074    8430 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:11.042196    8430 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:11.042243    8430 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0429 05:00:11.043018    8430 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:11.051685    8430 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:11.051750    8430 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0429 05:00:11.051819    8430 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.052252    8430 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:11.052573    8430 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:11.052603    8430 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 05:00:11.052662    8430 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:11.052649    8430 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	W0429 05:00:11.841500    8430 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0429 05:00:11.841908    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.872797    8430 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0429 05:00:11.872845    8430 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.872947    8430 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:00:11.896880    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 05:00:11.897015    8430 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 05:00:11.899064    8430 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0429 05:00:11.899079    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0429 05:00:11.925667    8430 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 05:00:11.925680    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0429 05:00:12.163001    8430 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
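	Images that fail the arch or digest check are removed and re-imported from the on-host cache with the cat-into-docker-load pipeline above. The cache entries behave like ordinary docker save tarballs, so the round trip can be reproduced by hand (image name from this run; the temp path is illustrative):

	  docker save gcr.io/k8s-minikube/storage-provisioner:v5 -o /tmp/storage-provisioner_v5
	  docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	  sudo cat /tmp/storage-provisioner_v5 | docker load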
	I0429 05:00:13.230913    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0429 05:00:13.256335    8430 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0429 05:00:13.256370    8430 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0429 05:00:13.256460    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0429 05:00:13.273792    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0429 05:00:13.338832    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:13.353868    8430 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0429 05:00:13.353895    8430 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:13.353960    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0429 05:00:13.366317    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0429 05:00:13.377563    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:13.378648    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0429 05:00:13.394876    8430 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0429 05:00:13.394900    8430 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0429 05:00:13.394905    8430 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:13.394916    8430 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0429 05:00:13.394964    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0429 05:00:13.394964    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0429 05:00:13.405429    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0429 05:00:13.405563    8430 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0429 05:00:13.406561    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0429 05:00:13.407514    8430 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0429 05:00:13.407527    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0429 05:00:13.415149    8430 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0429 05:00:13.415158    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0429 05:00:13.441939    8430 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0429 05:00:13.947068    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0429 05:00:13.958767    8430 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0429 05:00:13.959209    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:14.000932    8430 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0429 05:00:14.000967    8430 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0429 05:00:14.001004    8430 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0429 05:00:14.001032    8430 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:14.001060    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0429 05:00:14.001075    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0429 05:00:14.008724    8430 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:14.030773    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0429 05:00:14.030779    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0429 05:00:14.030916    8430 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0429 05:00:14.035132    8430 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0429 05:00:14.035152    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0429 05:00:14.035261    8430 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0429 05:00:14.035278    8430 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:14.035324    8430 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0429 05:00:14.062692    8430 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0429 05:00:14.073529    8430 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0429 05:00:14.073543    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0429 05:00:14.109032    8430 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0429 05:00:14.109072    8430 cache_images.go:92] duration metric: took 3.0738165s to LoadCachedImages
	W0429 05:00:14.109116    8430 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	I0429 05:00:14.109122    8430 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0429 05:00:14.109179    8430 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-383000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 05:00:14.109242    8430 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 05:00:14.122370    8430 cni.go:84] Creating CNI manager for ""
	I0429 05:00:14.122384    8430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:00:14.122389    8430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 05:00:14.122397    8430 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-383000 NodeName:stopped-upgrade-383000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 05:00:14.122470    8430 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-383000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
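	The generated file bundles four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Note the kubelet eviction knobs: with every evictionHard threshold at 0% and imageGCHighThresholdPercent at 100, disk-pressure eviction and image garbage collection are effectively switched off, which is reasonable for a short-lived test VM. Once a node is Ready, the kubelet's effective configuration can be read back through the standard configz debug endpoint (node name taken from this run):

	  kubectl get --raw "/api/v1/nodes/stopped-upgrade-383000/proxy/configz"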
	I0429 05:00:14.122528    8430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0429 05:00:14.125612    8430 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 05:00:14.125640    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 05:00:14.128296    8430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0429 05:00:14.133306    8430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 05:00:14.138038    8430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0429 05:00:14.143557    8430 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0429 05:00:14.144779    8430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 05:00:14.148078    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:00:14.213907    8430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 05:00:14.225431    8430 certs.go:68] Setting up /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000 for IP: 10.0.2.15
	I0429 05:00:14.225441    8430 certs.go:194] generating shared ca certs ...
	I0429 05:00:14.225449    8430 certs.go:226] acquiring lock for ca certs: {Name:mk6c1fe0c368234e15356f74a5a8907d9d0bc3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:14.225622    8430 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.key
	I0429 05:00:14.225844    8430 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.key
	I0429 05:00:14.225851    8430 certs.go:256] generating profile certs ...
	I0429 05:00:14.226042    8430 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.key
	I0429 05:00:14.226078    8430 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key.fc72e758
	I0429 05:00:14.226091    8430 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt.fc72e758 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0429 05:00:14.349165    8430 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt.fc72e758 ...
	I0429 05:00:14.349181    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt.fc72e758: {Name:mk90f388eda2edfb8de5b5afa7533ff52d4f49e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:14.349502    8430 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key.fc72e758 ...
	I0429 05:00:14.349506    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key.fc72e758: {Name:mkf949702a83e58fb4b946f45ffcc95bbbfbdaa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:14.349645    8430 certs.go:381] copying /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt.fc72e758 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt
	I0429 05:00:14.349782    8430 certs.go:385] copying /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key.fc72e758 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key
	I0429 05:00:14.350180    8430 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/proxy-client.key
	I0429 05:00:14.350351    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500.pem (1338 bytes)
	W0429 05:00:14.350555    8430 certs.go:480] ignoring /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500_empty.pem, impossibly tiny 0 bytes
	I0429 05:00:14.350560    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 05:00:14.350590    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem (1082 bytes)
	I0429 05:00:14.350616    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem (1123 bytes)
	I0429 05:00:14.350635    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/key.pem (1679 bytes)
	I0429 05:00:14.350676    8430 certs.go:484] found cert: /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem (1708 bytes)
	I0429 05:00:14.351012    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 05:00:14.357861    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 05:00:14.364654    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 05:00:14.371730    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 05:00:14.378547    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 05:00:14.385065    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 05:00:14.391362    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 05:00:14.398059    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 05:00:14.404314    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 05:00:14.410969    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/6500.pem --> /usr/share/ca-certificates/6500.pem (1338 bytes)
	I0429 05:00:14.418064    8430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/ssl/certs/65002.pem --> /usr/share/ca-certificates/65002.pem (1708 bytes)
	I0429 05:00:14.424250    8430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 05:00:14.429665    8430 ssh_runner.go:195] Run: openssl version
	I0429 05:00:14.431425    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 05:00:14.434452    8430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 05:00:14.435753    8430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I0429 05:00:14.435773    8430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 05:00:14.437508    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 05:00:14.440323    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6500.pem && ln -fs /usr/share/ca-certificates/6500.pem /etc/ssl/certs/6500.pem"
	I0429 05:00:14.443405    8430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6500.pem
	I0429 05:00:14.444876    8430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 11:44 /usr/share/ca-certificates/6500.pem
	I0429 05:00:14.444902    8430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6500.pem
	I0429 05:00:14.446721    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6500.pem /etc/ssl/certs/51391683.0"
	I0429 05:00:14.449552    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65002.pem && ln -fs /usr/share/ca-certificates/65002.pem /etc/ssl/certs/65002.pem"
	I0429 05:00:14.452314    8430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65002.pem
	I0429 05:00:14.453665    8430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 11:44 /usr/share/ca-certificates/65002.pem
	I0429 05:00:14.453685    8430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65002.pem
	I0429 05:00:14.455411    8430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65002.pem /etc/ssl/certs/3ec20f2e.0"
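	The ls/openssl/ln triplets above build OpenSSL's hashed-directory lookup scheme: openssl x509 -hash -noout prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets TLS clients locate the CA by that hash. Reproducing the first link from this run:

	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  b5213941
	  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0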
	I0429 05:00:14.458662    8430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 05:00:14.460138    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 05:00:14.462606    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 05:00:14.464347    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 05:00:14.466153    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 05:00:14.467888    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 05:00:14.469623    8430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
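	Each -checkend 86400 probe asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and non-zero otherwise, so the checks double as pass/fail validity tests with no date parsing. Standalone form, using one of the certs staged earlier:

	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"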
	I0429 05:00:14.471351    8430 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51384 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 05:00:14.471417    8430 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 05:00:14.481596    8430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 05:00:14.484699    8430 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 05:00:14.484706    8430 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 05:00:14.484709    8430 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 05:00:14.484729    8430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 05:00:14.487604    8430 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 05:00:14.487901    8430 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-383000" does not appear in /Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:00:14.487995    8430 kubeconfig.go:62] /Users/jenkins/minikube-integration/18771-6092/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-383000" cluster setting kubeconfig missing "stopped-upgrade-383000" context setting]
	I0429 05:00:14.488208    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/kubeconfig: {Name:mkc4105502c44b2331a2dd91226134a74ad93594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:00:14.488633    8430 kapi.go:59] client config for stopped-upgrade-383000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.key", CAFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10184fcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 05:00:14.489089    8430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 05:00:14.491715    8430 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-383000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0429 05:00:14.491720    8430 kubeadm.go:1154] stopping kube-system containers ...
	I0429 05:00:14.491760    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 05:00:14.510801    8430 docker.go:483] Stopping containers: [354faa34ac46 80cae0f8410a 63c844e608a1 524a65bbf479 dbef1337b10c e5b938769f45 f4864e330600 dad4d6abc111]
	I0429 05:00:14.510862    8430 ssh_runner.go:195] Run: docker stop 354faa34ac46 80cae0f8410a 63c844e608a1 524a65bbf479 dbef1337b10c e5b938769f45 f4864e330600 dad4d6abc111
	I0429 05:00:14.526008    8430 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 05:00:14.531369    8430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 05:00:14.534608    8430 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 05:00:14.534620    8430 kubeadm.go:156] found existing configuration files:
	
	I0429 05:00:14.534644    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/admin.conf
	I0429 05:00:14.537143    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 05:00:14.537164    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 05:00:14.539861    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/kubelet.conf
	I0429 05:00:14.542834    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 05:00:14.542853    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 05:00:14.545408    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/controller-manager.conf
	I0429 05:00:14.547753    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 05:00:14.547774    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 05:00:14.550633    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/scheduler.conf
	I0429 05:00:14.553021    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 05:00:14.553043    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 05:00:14.555706    8430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 05:00:14.558769    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:14.582079    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:14.938672    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:15.053144    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 05:00:15.073817    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
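The five kubeadm init phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) rebuild the control plane piece by piece from the freshly copied /var/tmp/minikube/kubeadm.yaml. A sketch of the same sequence, assuming kubeadm on the local PATH (minikube actually runs it over SSH with PATH pointing at /var/lib/minikube/binaries/v1.24.1):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"kubeadm", "init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("sudo", args...).CombinedOutput()
			fmt.Printf("phase %s: err=%v\n%s", strings.Join(p, " "), err, out)
		}
	}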
	I0429 05:00:15.094035    8430 api_server.go:52] waiting for apiserver process to appear ...
	I0429 05:00:15.094106    8430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:00:15.595399    8430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:00:16.096213    8430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:00:16.100543    8430 api_server.go:72] duration metric: took 1.006511625s to wait for apiserver process to appear ...
	I0429 05:00:16.100551    8430 api_server.go:88] waiting for apiserver healthz status ...
	I0429 05:00:16.100560    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:21.102753    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:21.102882    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:26.103823    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:26.103927    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:31.104995    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:31.105194    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:36.106595    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:36.106672    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:41.108378    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:41.108442    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:46.110049    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:46.110135    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:51.112683    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:51.112726    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:00:56.113840    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:00:56.113862    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:01.116090    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:01.116146    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:06.118211    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:06.118253    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:11.119635    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:11.119685    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:16.121940    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
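Once the pgrep wait above has confirmed a kube-apiserver process, minikube switches to polling the apiserver's /healthz endpoint; every probe here fails with "Client.Timeout exceeded", which is the roughly 5-second per-request client timeout visible in the timestamps, and the loop retries immediately. A minimal sketch of that poll, assuming the apiserver's certificate is not trusted locally (minikube instead verifies against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the "Client.Timeout exceeded" errors above
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // apiserver still not answering
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
	}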
	I0429 05:01:16.122171    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:16.140194    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:16.140306    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:16.153403    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:16.153482    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:16.165183    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:16.165249    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:16.175871    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:16.175942    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:16.190318    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:16.190388    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:16.200982    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:16.201054    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:16.211881    8430 logs.go:276] 0 containers: []
	W0429 05:01:16.211893    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:16.211951    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:16.222292    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
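When healthz keeps failing, minikube falls back to collecting diagnostics. The docker ps calls above discover containers one component at a time, filtering on the k8s_<component> name prefix that cri-dockerd gives Kubernetes-managed containers; two IDs per control-plane component means an exited container plus its replacement, consistent with a crash/restart loop. A sketch of that discovery step, assuming a local docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns all container IDs (running or exited) whose name
	// matches the k8s_<component> prefix.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, _ := containerIDs(c)
			fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		}
	}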
	I0429 05:01:16.222309    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:16.222314    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:16.235971    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:16.235994    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:16.250078    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:16.250092    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:16.261833    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:16.261844    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:16.272855    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:16.272865    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:16.285024    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:16.285035    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:16.324171    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:16.324181    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:16.339533    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:16.339549    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:16.358511    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:16.358526    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:16.370223    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:16.370237    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:16.385991    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:16.386004    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:16.390330    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:16.390337    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:16.491678    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:16.491702    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:16.519465    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:16.519476    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:16.531213    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:16.531226    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:16.555322    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:16.555331    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:16.574443    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:16.574456    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
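Each discovered container then gets a "docker logs --tail 400" pass, while kubelet and Docker logs come from journalctl and node state from "kubectl describe nodes"; this whole gathering cycle repeats between healthz attempts for the remainder of the test. A sketch of one pass, assuming the container IDs collected above (in minikube they come from discovery, not a hard-coded list):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// IDs taken from the log above, purely for illustration.
		for _, id := range []string{"14ead7d448e3", "a0656ed4596d", "bdba81d6efb1"} {
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s ===\n%s", id, out)
		}
		// systemd units are read the same way:
		//   sudo journalctl -u kubelet -n 400
		//   sudo journalctl -u docker -u cri-docker -n 400
	}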
	I0429 05:01:19.102495    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:24.103013    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:24.103429    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:24.138266    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:24.138410    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:24.158694    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:24.158792    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:24.172793    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:24.172869    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:24.185147    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:24.185216    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:24.195876    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:24.195948    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:24.210609    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:24.214698    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:24.224636    8430 logs.go:276] 0 containers: []
	W0429 05:01:24.224649    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:24.224705    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:24.234666    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:24.234689    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:24.234694    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:24.259493    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:24.259508    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:24.272122    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:24.272134    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:24.283400    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:24.283411    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:24.295390    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:24.295406    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:24.299405    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:24.299413    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:24.338523    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:24.338534    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:24.362580    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:24.362591    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:24.379513    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:24.379529    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:24.395033    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:24.395049    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:24.406346    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:24.406356    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:24.423264    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:24.423274    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:24.434658    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:24.434668    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:24.459593    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:24.459600    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:24.498150    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:24.498169    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:24.511896    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:24.511908    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:24.525757    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:24.525771    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:27.039868    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:32.042253    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:32.042454    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:32.064592    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:32.064707    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:32.080007    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:32.080095    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:32.092641    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:32.092708    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:32.104029    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:32.104097    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:32.114377    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:32.114441    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:32.124589    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:32.124651    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:32.134893    8430 logs.go:276] 0 containers: []
	W0429 05:01:32.134906    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:32.134961    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:32.145346    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:32.145363    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:32.145368    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:32.161706    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:32.161716    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:32.173905    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:32.173916    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:32.198081    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:32.198088    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:32.209997    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:32.210007    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:32.224717    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:32.224727    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:32.250411    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:32.250422    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:32.262210    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:32.262222    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:32.273922    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:32.273934    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:32.294711    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:32.294723    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:32.306017    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:32.306030    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:32.344968    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:32.344982    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:32.359875    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:32.359885    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:32.371019    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:32.371031    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:32.406347    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:32.406360    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:32.421155    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:32.421168    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:32.438574    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:32.438585    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:34.944948    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:39.947647    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:39.947850    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:39.962074    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:39.962155    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:39.973614    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:39.973672    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:39.983949    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:39.984011    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:39.994105    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:39.994172    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:40.004701    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:40.004795    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:40.015279    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:40.015344    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:40.025930    8430 logs.go:276] 0 containers: []
	W0429 05:01:40.025943    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:40.025998    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:40.036312    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:40.036332    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:40.036338    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:40.074738    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:40.074749    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:40.086548    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:40.086560    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:40.098190    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:40.098201    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:40.143354    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:40.143370    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:40.155748    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:40.155759    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:40.166943    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:40.166954    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:40.179081    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:40.179093    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:40.193407    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:40.193417    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:40.214387    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:40.214397    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:40.225726    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:40.225737    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:40.240497    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:40.240507    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:40.264844    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:40.264855    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:40.269548    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:40.269555    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:40.294546    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:40.294558    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:40.308581    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:40.308592    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:40.323431    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:40.323442    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:42.843155    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:47.845553    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:47.845891    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:47.881537    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:47.881687    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:47.902608    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:47.902700    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:47.917429    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:47.917510    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:47.929388    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:47.929459    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:47.940175    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:47.940248    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:47.951245    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:47.951312    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:47.961192    8430 logs.go:276] 0 containers: []
	W0429 05:01:47.961205    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:47.961265    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:47.971560    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:47.971577    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:47.971582    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:47.986308    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:47.986321    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:47.997902    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:47.997916    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:48.036463    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:48.036475    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:48.050612    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:48.050622    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:48.074920    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:48.074929    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:48.086725    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:48.086735    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:48.097865    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:48.097881    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:48.123449    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:48.123457    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:48.135212    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:48.135223    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:48.170375    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:48.170391    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:48.182531    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:48.182543    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:48.197338    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:48.197349    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:48.202083    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:48.202093    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:48.213814    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:48.213825    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:48.228203    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:48.228213    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:48.247116    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:48.247131    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:50.773881    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:01:55.776210    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:01:55.776288    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:01:55.786657    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:01:55.786735    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:01:55.796877    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:01:55.796953    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:01:55.807996    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:01:55.808057    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:01:55.819809    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:01:55.819879    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:01:55.830109    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:01:55.830168    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:01:55.840888    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:01:55.840968    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:01:55.850771    8430 logs.go:276] 0 containers: []
	W0429 05:01:55.850782    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:01:55.850833    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:01:55.861022    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:01:55.861042    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:01:55.861048    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:01:55.873143    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:01:55.873153    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:01:55.884532    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:01:55.884547    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:01:55.897465    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:01:55.897477    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:01:55.901918    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:01:55.901925    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:01:55.930551    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:01:55.930565    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:01:55.941956    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:01:55.941967    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:01:55.956758    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:01:55.956772    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:01:55.971859    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:01:55.971869    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:01:55.997390    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:01:55.997397    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:01:56.032413    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:01:56.032427    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:01:56.046734    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:01:56.046745    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:01:56.060981    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:01:56.060994    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:01:56.075304    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:01:56.075314    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:01:56.111761    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:01:56.111772    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:01:56.129280    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:01:56.129291    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:01:56.144154    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:01:56.144164    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:01:58.657869    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:03.660240    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:03.660398    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:03.677927    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:03.678004    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:03.691682    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:03.691749    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:03.702233    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:03.702301    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:03.712905    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:03.712970    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:03.723392    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:03.723464    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:03.736623    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:03.736701    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:03.747389    8430 logs.go:276] 0 containers: []
	W0429 05:02:03.747401    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:03.747463    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:03.760880    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:03.760897    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:03.760903    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:03.772595    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:03.772604    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:03.810212    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:03.810220    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:03.834403    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:03.834414    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:03.848685    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:03.848695    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:03.871240    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:03.871251    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:03.885869    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:03.885882    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:03.921675    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:03.921685    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:03.937036    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:03.937047    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:03.955359    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:03.955373    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:03.970093    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:03.970104    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:03.993900    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:03.993910    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:04.005457    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:04.005469    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:04.009841    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:04.009848    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:04.021804    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:04.021815    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:04.037616    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:04.037627    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:04.055580    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:04.055591    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:06.570693    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:11.573057    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:11.573247    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:11.591177    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:11.591261    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:11.605028    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:11.605093    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:11.616407    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:11.616471    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:11.627064    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:11.627136    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:11.638181    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:11.638243    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:11.648660    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:11.648730    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:11.658825    8430 logs.go:276] 0 containers: []
	W0429 05:02:11.658836    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:11.658891    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:11.669559    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:11.669578    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:11.669583    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:11.682366    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:11.682377    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:11.720595    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:11.720606    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:11.735189    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:11.735200    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:11.747637    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:11.747648    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:11.764983    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:11.764995    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:11.776296    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:11.776308    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:11.812436    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:11.812449    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:11.827339    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:11.827352    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:11.841758    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:11.841769    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:11.852882    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:11.852894    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:11.864597    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:11.864612    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:11.877245    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:11.877258    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:11.881963    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:11.881969    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:11.895914    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:11.895924    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:11.919890    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:11.919897    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:11.946813    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:11.946826    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:14.469203    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:19.471634    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:19.471811    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:19.487798    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:19.487883    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:19.501217    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:19.501284    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:19.512270    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:19.512344    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:19.522827    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:19.522905    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:19.533576    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:19.533649    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:19.545513    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:19.545587    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:19.555799    8430 logs.go:276] 0 containers: []
	W0429 05:02:19.555811    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:19.555870    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:19.565691    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:19.565710    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:19.565716    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:19.579549    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:19.579560    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:19.591730    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:19.591741    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:19.606612    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:19.606621    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:19.621641    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:19.621652    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:19.634443    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:19.634457    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:19.648166    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:19.648180    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:19.673602    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:19.673613    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:19.689178    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:19.689190    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:19.700609    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:19.700622    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:19.718068    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:19.718079    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:19.730391    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:19.730404    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:19.768137    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:19.768146    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:19.772473    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:19.772488    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:19.807096    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:19.807109    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:19.820800    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:19.820809    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:19.832357    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:19.832373    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:22.357188    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:27.359623    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:27.359763    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:27.373810    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:27.373900    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:27.385510    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:27.385581    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:27.395670    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:27.395733    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:27.405545    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:27.405617    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:27.416134    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:27.416191    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:27.426539    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:27.426611    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:27.436780    8430 logs.go:276] 0 containers: []
	W0429 05:02:27.436791    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:27.436845    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:27.447677    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:27.447700    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:27.447706    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:27.451982    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:27.451988    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:27.466344    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:27.466355    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:27.482067    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:27.482077    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:27.496274    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:27.496288    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:27.511414    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:27.511430    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:27.535017    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:27.535025    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:27.559676    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:27.559685    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:27.574442    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:27.574454    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:27.586415    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:27.586426    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:27.604032    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:27.604042    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:27.615284    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:27.615295    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:27.626323    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:27.626336    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:27.664134    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:27.664144    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:27.699050    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:27.699063    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:27.713420    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:27.713431    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:27.725419    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:27.725430    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
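	The block above is one pass of minikube's apiserver wait loop: a GET against https://10.0.2.15:8443/healthz times out client-side after about five seconds (the gap between each "Checking" and "stopped:" pair), and the runner then re-enumerates containers and re-gathers component logs before the next probe. A minimal Go sketch of such a poll, assuming an illustrative 3-second pause between attempts and skipped TLS verification against the VM's self-signed certificate (both assumptions, not taken from the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver /healthz endpoint until it answers
// 200 OK or the overall deadline expires. The 5s per-request timeout
// matches the "Checking"/"stopped:" spacing above; the retry pause and
// deadline are illustrative assumptions.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The minikube guest serves a self-signed certificate; a
			// real client would pin the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		fmt.Printf("stopped: %s: %v\n", url, err)
		time.Sleep(3 * time.Second) // logs are gathered here before retrying
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```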
	I0429 05:02:30.245700    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:35.247988    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:35.248151    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:35.259460    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:35.259528    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:35.269989    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:35.270055    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:35.280633    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:35.280704    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:35.291205    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:35.291279    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:35.302365    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:35.302432    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:35.313000    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:35.313074    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:35.325868    8430 logs.go:276] 0 containers: []
	W0429 05:02:35.325882    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:35.325948    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:35.341151    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:35.341172    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:35.341178    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:35.352331    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:35.352345    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:35.363239    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:35.363251    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:35.376344    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:35.376359    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:35.392132    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:35.392145    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:35.403852    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:35.403864    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:35.421580    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:35.421594    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:35.433335    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:35.433345    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:35.458317    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:35.458323    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:35.472497    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:35.472507    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:35.476624    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:35.476631    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:35.511801    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:35.511810    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:35.527066    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:35.527078    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:35.539375    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:35.539386    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:35.554679    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:35.554692    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:35.590745    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:35.590755    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:35.604449    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:35.604483    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:38.131292    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:43.133816    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:43.133993    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:43.151945    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:43.152024    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:43.168429    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:43.168504    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:43.179521    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:43.179589    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:43.189672    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:43.189730    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:43.199718    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:43.199782    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:43.219222    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:43.219289    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:43.229559    8430 logs.go:276] 0 containers: []
	W0429 05:02:43.229570    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:43.229624    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:43.249681    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:43.249700    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:43.249706    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:43.272431    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:43.272439    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:43.308309    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:43.308316    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:43.322050    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:43.322060    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:43.334118    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:43.334129    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:43.346792    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:43.346804    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:43.381872    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:43.381882    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:43.395778    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:43.395789    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:43.407612    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:43.407622    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:43.422827    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:43.422838    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:43.434949    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:43.434961    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:43.439376    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:43.439385    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:43.467408    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:43.467422    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:43.482070    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:43.482080    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:43.497882    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:43.497892    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:43.509253    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:43.509264    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:43.521308    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:43.521318    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
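	The enumeration step in each cycle is two docker invocations per component: a filtered `docker ps -a` to find container IDs (running or exited), then `docker logs --tail 400` on each match. A self-contained sketch of that fan-out (the helper names are illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the `docker ps -a --filter=name=k8s_<component>`
// calls in the log, returning every matching container ID.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the `docker logs --tail 400 <id>` step run for each
// discovered container.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// Component filters copied from the log above; kindnet matches
	// nothing on this cluster, producing the "0 containers" warning.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("  %s: %d bytes of logs\n", id, len(logs))
		}
	}
}
```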
	I0429 05:02:46.040623    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:51.043002    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:51.043273    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:51.078146    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:51.078260    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:51.094569    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:51.094652    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:51.107159    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:51.107234    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:51.118892    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:51.118958    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:51.129375    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:51.129444    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:51.140156    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:51.140216    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:51.153672    8430 logs.go:276] 0 containers: []
	W0429 05:02:51.153685    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:51.153748    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:51.164285    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:51.164303    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:51.164309    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:51.187952    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:51.187962    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:51.192000    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:51.192006    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:51.205595    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:51.205605    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:51.219534    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:51.219545    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:51.234592    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:51.234603    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:51.249490    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:51.249503    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:51.287357    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:51.287367    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:02:51.325936    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:51.325948    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:51.355436    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:51.355448    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:51.367216    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:51.367230    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:51.378549    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:51.378561    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:51.394382    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:51.394397    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:51.406208    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:51.406219    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:51.429877    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:51.429885    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:51.444619    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:51.444634    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:51.456658    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:51.456673    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:53.970114    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:02:58.972648    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:02:58.973058    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:02:59.009075    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:02:59.009211    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:02:59.029243    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:02:59.029327    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:02:59.043874    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:02:59.043955    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:02:59.056664    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:02:59.056732    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:02:59.067117    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:02:59.067187    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:02:59.081483    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:02:59.081555    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:02:59.092441    8430 logs.go:276] 0 containers: []
	W0429 05:02:59.092452    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:02:59.092506    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:02:59.107690    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:02:59.107709    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:02:59.107714    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:02:59.126060    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:02:59.126075    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:02:59.148919    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:02:59.148931    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:02:59.161770    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:02:59.161781    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:02:59.173069    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:02:59.173081    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:02:59.198844    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:02:59.198854    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:02:59.211018    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:02:59.214442    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:02:59.229818    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:02:59.229828    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:02:59.241317    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:02:59.241328    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:02:59.264698    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:02:59.264705    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:02:59.305277    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:02:59.305290    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:02:59.310673    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:02:59.310681    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:02:59.326350    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:02:59.326361    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:02:59.345717    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:02:59.345731    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:02:59.361664    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:02:59.361678    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:02:59.373051    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:02:59.373064    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:02:59.384897    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:02:59.384915    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:01.921741    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:06.923990    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:06.924221    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:06.940336    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:06.940430    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:06.953454    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:06.953520    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:06.965952    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:06.966027    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:06.977847    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:06.977927    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:06.988624    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:06.988696    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:06.999425    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:06.999495    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:07.009291    8430 logs.go:276] 0 containers: []
	W0429 05:03:07.009303    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:07.009355    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:07.021491    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:07.021510    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:07.021515    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:07.032606    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:07.032622    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:07.048314    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:07.048324    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:07.085636    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:07.085648    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:07.099543    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:07.099560    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:07.117733    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:07.117744    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:07.129182    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:07.129197    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:07.151730    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:07.151740    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:07.166904    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:07.166919    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:07.171627    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:07.171654    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:07.207673    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:07.207684    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:07.221754    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:07.221765    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:07.246001    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:07.246017    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:07.262539    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:07.262550    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:07.275225    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:07.275238    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:07.288477    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:07.288488    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:07.299370    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:07.299381    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
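	Each cycle also collects host-level state from inside the guest: the kubelet and Docker/cri-docker journals, warning-level-and-above dmesg output, and a container listing that prefers crictl but falls back to `docker ps -a` via the `which crictl || echo crictl` idiom. A local sketch of those exact shell commands (minikube runs them through its ssh_runner over SSH; invoking them directly, and the map-driven loop, are assumptions for illustration — a map's nondeterministic iteration incidentally mimics the varying gather order visible between cycles):

```go
package main

import (
	"fmt"
	"os/exec"
)

// hostGathers lists the host-level collection commands seen in each
// cycle, copied verbatim from the log above.
var hostGathers = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range hostGathers {
		// Run through bash, as the log's `/bin/bash -c "..."` lines do.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}
```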
	I0429 05:03:09.813792    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:14.815627    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:14.815803    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:14.835043    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:14.835124    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:14.849159    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:14.849227    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:14.861058    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:14.861133    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:14.871560    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:14.871628    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:14.882225    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:14.882287    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:14.892744    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:14.892814    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:14.902451    8430 logs.go:276] 0 containers: []
	W0429 05:03:14.902463    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:14.902513    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:14.914621    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:14.914636    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:14.914641    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:14.925650    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:14.925664    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:14.950565    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:14.950575    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:14.962372    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:14.962383    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:14.977533    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:14.977549    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:14.989861    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:14.989871    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:15.004110    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:15.004125    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:15.016018    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:15.016030    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:15.036278    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:15.036289    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:15.048213    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:15.048224    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:15.072075    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:15.072083    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:15.076150    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:15.076156    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:15.092354    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:15.092368    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:15.112430    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:15.112443    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:15.124472    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:15.124482    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:15.163187    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:15.163201    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:15.199920    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:15.199931    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:17.716736    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:22.718975    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:22.719179    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:22.737225    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:22.737319    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:22.750330    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:22.750404    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:22.762028    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:22.762095    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:22.772088    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:22.772177    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:22.783249    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:22.783317    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:22.794025    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:22.794087    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:22.804744    8430 logs.go:276] 0 containers: []
	W0429 05:03:22.804755    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:22.804814    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:22.815288    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:22.815305    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:22.815310    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:22.829458    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:22.829468    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:22.840490    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:22.840499    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:22.855008    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:22.855023    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:22.865986    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:22.865995    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:22.888499    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:22.888507    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:22.924087    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:22.924098    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:22.935791    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:22.935800    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:22.950863    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:22.950877    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:22.964211    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:22.964223    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:22.983005    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:22.983016    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:22.994471    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:22.994481    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:23.031025    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:23.031034    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:23.035346    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:23.035353    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:23.049064    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:23.049074    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:23.074270    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:23.074281    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:23.092453    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:23.092467    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:25.608362    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:30.609118    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:30.609310    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:30.620699    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:30.620775    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:30.631304    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:30.631365    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:30.641779    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:30.641843    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:30.653090    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:30.653155    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:30.663315    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:30.663379    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:30.674194    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:30.674255    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:30.685049    8430 logs.go:276] 0 containers: []
	W0429 05:03:30.685059    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:30.685117    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:30.695884    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:30.695903    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:30.695909    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:30.707666    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:30.707678    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:30.724683    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:30.724694    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:30.736062    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:30.736072    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:30.747619    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:30.747629    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:30.784144    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:30.784153    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:30.819444    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:30.819455    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:30.833416    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:30.833425    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:30.848757    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:30.848771    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:30.873059    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:30.873070    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:30.887460    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:30.887472    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:30.898742    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:30.898753    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:30.921263    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:30.921271    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:30.932839    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:30.932854    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:30.936843    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:30.936850    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:30.950569    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:30.950579    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:30.961673    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:30.961685    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:33.478880    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:38.480042    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:38.480296    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:38.509538    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:38.509647    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:38.528563    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:38.528642    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:38.542494    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:38.542575    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:38.557381    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:38.557465    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:38.567503    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:38.567570    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:38.578752    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:38.578821    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:38.589293    8430 logs.go:276] 0 containers: []
	W0429 05:03:38.589304    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:38.589361    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:38.599361    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:38.599378    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:38.599385    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:38.660532    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:38.660547    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:38.674967    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:38.674978    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:38.689896    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:38.689907    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:38.704489    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:38.704501    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:38.708593    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:38.708600    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:38.739444    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:38.739455    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:38.761983    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:38.761997    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:38.773480    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:38.773494    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:38.795316    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:38.795323    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:38.809389    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:38.809401    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:38.820085    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:38.820097    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:38.832052    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:38.832063    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:38.844894    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:38.844905    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:38.856331    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:38.856342    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:38.893526    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:38.893535    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:38.905806    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:38.905817    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:41.423145    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:46.425782    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:03:46.425924    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:46.439791    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:46.439865    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:46.453171    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:46.453239    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:46.463968    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:46.464033    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:46.475061    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:46.475130    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:46.486009    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:46.486075    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:46.497164    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:46.497236    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:46.507287    8430 logs.go:276] 0 containers: []
	W0429 05:03:46.507298    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:46.507350    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:46.518239    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:46.518257    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:46.518262    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:46.532338    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:46.532354    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:46.544167    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:46.544181    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:46.559586    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:46.559597    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:46.571449    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:46.571461    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:46.582915    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:46.582928    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:46.595865    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:46.595876    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:46.610679    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:46.610690    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:46.622287    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:46.622299    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:46.637447    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:46.637459    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:46.674875    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:46.674885    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:46.710240    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:46.710251    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:46.734679    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:46.734689    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:46.748728    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:46.748738    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:46.760977    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:46.760987    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:46.781269    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:46.781281    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:46.785344    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:46.785351    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
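	(The block above is one of minikube's periodic diagnostics passes: for each control-plane component it resolves container IDs with a filtered docker ps, then tails the last 400 log lines, plus kubelet/Docker journals and dmesg. A condensed shell sketch of that pattern — the loop itself is an illustrative assumption, not minikube's actual Go code; the individual commands are the ones shown in the log:

# Sketch of the log-gathering pass above (illustrative, not minikube source).
for component in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
  ids=$(docker ps -a --filter=name="k8s_${component}" --format='{{.ID}}')
  [ -z "$ids" ] && echo "No container was found matching \"${component}\"" && continue
  for id in $ids; do
    docker logs --tail 400 "$id"   # per-container component logs
  done
done
# Host-level logs gathered alongside the containers:
sudo journalctl -u kubelet -n 400
sudo journalctl -u docker -u cri-docker -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
)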
	I0429 05:03:49.311015    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:03:54.313821    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
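	(Each "Checking apiserver healthz" / "stopped:" pair is one probe of the apiserver health endpoint with a roughly five-second client timeout — compare the timestamps. A minimal shell equivalent of a single probe, assuming curl against the same endpoint; minikube does this in Go, not curl:

# One healthz probe (sketch). -k skips TLS verification;
# --max-time 5 mirrors the ~5s client timeout seen above.
if curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; then
  echo "apiserver healthy"
else
  echo "stopped: healthz probe timed out"
fi
)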
	I0429 05:03:54.314252    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:03:54.357117    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:03:54.357256    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:03:54.379015    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:03:54.379117    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:03:54.395841    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:03:54.395917    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:03:54.408259    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:03:54.408335    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:03:54.419012    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:03:54.419083    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:03:54.429783    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:03:54.429853    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:03:54.440880    8430 logs.go:276] 0 containers: []
	W0429 05:03:54.440892    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:03:54.440954    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:03:54.451366    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:03:54.451383    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:03:54.451389    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:03:54.469452    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:03:54.469461    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:03:54.484276    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:03:54.484288    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:03:54.519360    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:03:54.519371    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:03:54.534430    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:03:54.534441    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:03:54.545822    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:03:54.545837    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:03:54.557821    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:03:54.557832    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:03:54.576939    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:03:54.576948    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:03:54.588419    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:03:54.588430    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:03:54.610594    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:03:54.610603    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:03:54.624213    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:03:54.624223    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:03:54.628423    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:03:54.628430    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:03:54.654338    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:03:54.654350    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:03:54.666066    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:03:54.666079    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:03:54.681408    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:03:54.681419    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:03:54.693980    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:03:54.693990    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:03:54.731951    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:03:54.731958    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:03:57.245454    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:02.246048    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:02.246516    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:02.286761    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:04:02.286893    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:02.308799    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:04:02.308907    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:02.324112    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:04:02.324192    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:02.337520    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:04:02.337598    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:02.348181    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:04:02.348244    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:02.358666    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:04:02.358730    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:02.369585    8430 logs.go:276] 0 containers: []
	W0429 05:04:02.369600    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:02.369674    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:02.380567    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:04:02.380584    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:02.380590    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:04:02.416583    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:02.416591    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:02.452010    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:04:02.452022    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:04:02.467238    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:04:02.467248    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:04:02.483661    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:04:02.483674    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:04:02.495336    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:02.495348    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:02.499905    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:04:02.499912    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:04:02.511032    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:04:02.511045    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:04:02.522800    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:04:02.522810    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:04:02.534559    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:04:02.534577    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:04:02.550578    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:04:02.550589    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:04:02.562432    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:04:02.562442    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:04:02.580204    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:02.580214    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:02.602872    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:04:02.602884    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:04:02.617149    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:04:02.617162    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:04:02.641884    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:04:02.641894    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:04:02.656360    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:04:02.656371    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:05.170380    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:10.172687    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:10.172871    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:04:10.192820    8430 logs.go:276] 2 containers: [14ead7d448e3 354faa34ac46]
	I0429 05:04:10.192908    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:04:10.207519    8430 logs.go:276] 2 containers: [a0656ed4596d f4864e330600]
	I0429 05:04:10.207599    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:04:10.219771    8430 logs.go:276] 1 containers: [af04ef5dc786]
	I0429 05:04:10.219846    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:04:10.230682    8430 logs.go:276] 2 containers: [23e392d4e0a2 80cae0f8410a]
	I0429 05:04:10.230747    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:04:10.245700    8430 logs.go:276] 1 containers: [bdba81d6efb1]
	I0429 05:04:10.245768    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:04:10.259986    8430 logs.go:276] 2 containers: [39145a12f44f dbef1337b10c]
	I0429 05:04:10.260050    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:04:10.270346    8430 logs.go:276] 0 containers: []
	W0429 05:04:10.270357    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:04:10.270415    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:04:10.281183    8430 logs.go:276] 2 containers: [f294decaf68e 6698ac32109e]
	I0429 05:04:10.281200    8430 logs.go:123] Gathering logs for kube-apiserver [14ead7d448e3] ...
	I0429 05:04:10.281205    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ead7d448e3"
	I0429 05:04:10.295522    8430 logs.go:123] Gathering logs for coredns [af04ef5dc786] ...
	I0429 05:04:10.295536    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af04ef5dc786"
	I0429 05:04:10.309893    8430 logs.go:123] Gathering logs for kube-scheduler [80cae0f8410a] ...
	I0429 05:04:10.309905    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cae0f8410a"
	I0429 05:04:10.324571    8430 logs.go:123] Gathering logs for kube-controller-manager [39145a12f44f] ...
	I0429 05:04:10.324581    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39145a12f44f"
	I0429 05:04:10.342994    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:04:10.343004    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:04:10.355498    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:04:10.355509    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:04:10.365090    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:04:10.365097    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:04:10.404281    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:04:10.404292    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:04:10.426370    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:04:10.426377    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:04:10.466836    8430 logs.go:123] Gathering logs for kube-controller-manager [dbef1337b10c] ...
	I0429 05:04:10.466847    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbef1337b10c"
	I0429 05:04:10.481818    8430 logs.go:123] Gathering logs for storage-provisioner [f294decaf68e] ...
	I0429 05:04:10.481829    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f294decaf68e"
	I0429 05:04:10.494199    8430 logs.go:123] Gathering logs for storage-provisioner [6698ac32109e] ...
	I0429 05:04:10.494213    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6698ac32109e"
	I0429 05:04:10.506042    8430 logs.go:123] Gathering logs for kube-apiserver [354faa34ac46] ...
	I0429 05:04:10.506059    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 354faa34ac46"
	I0429 05:04:10.530324    8430 logs.go:123] Gathering logs for kube-scheduler [23e392d4e0a2] ...
	I0429 05:04:10.530335    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23e392d4e0a2"
	I0429 05:04:10.542210    8430 logs.go:123] Gathering logs for kube-proxy [bdba81d6efb1] ...
	I0429 05:04:10.542222    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdba81d6efb1"
	I0429 05:04:10.553728    8430 logs.go:123] Gathering logs for etcd [a0656ed4596d] ...
	I0429 05:04:10.553741    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0656ed4596d"
	I0429 05:04:10.571890    8430 logs.go:123] Gathering logs for etcd [f4864e330600] ...
	I0429 05:04:10.571907    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4864e330600"
	I0429 05:04:13.088848    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:18.091310    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:18.091380    8430 kubeadm.go:591] duration metric: took 4m3.607182375s to restartPrimaryControlPlane
	W0429 05:04:18.091443    8430 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 05:04:18.091471    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0429 05:04:19.182785    8430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.091305833s)
	I0429 05:04:19.182854    8430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 05:04:19.188003    8430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 05:04:19.190940    8430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 05:04:19.193756    8430 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 05:04:19.193763    8430 kubeadm.go:156] found existing configuration files:
	
	I0429 05:04:19.193787    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/admin.conf
	I0429 05:04:19.196245    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 05:04:19.196264    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 05:04:19.199291    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/kubelet.conf
	I0429 05:04:19.202229    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 05:04:19.202246    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 05:04:19.204780    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/controller-manager.conf
	I0429 05:04:19.207405    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 05:04:19.207430    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 05:04:19.210278    8430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/scheduler.conf
	I0429 05:04:19.212740    8430 kubeadm.go:162] "https://control-plane.minikube.internal:51384" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51384 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 05:04:19.213307    8430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 05:04:19.215880    8430 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 05:04:19.232859    8430 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0429 05:04:19.232893    8430 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 05:04:19.282442    8430 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 05:04:19.282502    8430 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 05:04:19.282543    8430 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 05:04:19.330829    8430 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 05:04:19.334197    8430 out.go:204]   - Generating certificates and keys ...
	I0429 05:04:19.334230    8430 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 05:04:19.334261    8430 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 05:04:19.334306    8430 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 05:04:19.334342    8430 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 05:04:19.334378    8430 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 05:04:19.334404    8430 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 05:04:19.334436    8430 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 05:04:19.334522    8430 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 05:04:19.334560    8430 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 05:04:19.334595    8430 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 05:04:19.334626    8430 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 05:04:19.334657    8430 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 05:04:19.482917    8430 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 05:04:19.521166    8430 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 05:04:19.673660    8430 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 05:04:20.009456    8430 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 05:04:20.040905    8430 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 05:04:20.041292    8430 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 05:04:20.041314    8430 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 05:04:20.112708    8430 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 05:04:20.116003    8430 out.go:204]   - Booting up control plane ...
	I0429 05:04:20.116052    8430 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 05:04:20.116143    8430 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 05:04:20.116360    8430 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 05:04:20.116419    8430 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 05:04:20.116527    8430 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 05:04:24.612586    8430 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501802 seconds
	I0429 05:04:24.612645    8430 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 05:04:24.616517    8430 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 05:04:25.135236    8430 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 05:04:25.135476    8430 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-383000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 05:04:25.639369    8430 kubeadm.go:309] [bootstrap-token] Using token: xmutsd.mvjfrqnk9xs5g1vn
	I0429 05:04:25.644519    8430 out.go:204]   - Configuring RBAC rules ...
	I0429 05:04:25.644581    8430 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 05:04:25.644627    8430 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 05:04:25.648844    8430 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 05:04:25.649752    8430 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 05:04:25.650562    8430 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 05:04:25.651359    8430 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 05:04:25.654526    8430 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 05:04:25.825235    8430 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 05:04:26.043030    8430 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 05:04:26.043548    8430 kubeadm.go:309] 
	I0429 05:04:26.043582    8430 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 05:04:26.043585    8430 kubeadm.go:309] 
	I0429 05:04:26.043622    8430 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 05:04:26.043625    8430 kubeadm.go:309] 
	I0429 05:04:26.043649    8430 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 05:04:26.043686    8430 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 05:04:26.043723    8430 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 05:04:26.043726    8430 kubeadm.go:309] 
	I0429 05:04:26.043753    8430 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 05:04:26.043755    8430 kubeadm.go:309] 
	I0429 05:04:26.043785    8430 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 05:04:26.043788    8430 kubeadm.go:309] 
	I0429 05:04:26.043814    8430 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 05:04:26.043850    8430 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 05:04:26.043889    8430 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 05:04:26.043892    8430 kubeadm.go:309] 
	I0429 05:04:26.043935    8430 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 05:04:26.043970    8430 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 05:04:26.043975    8430 kubeadm.go:309] 
	I0429 05:04:26.044023    8430 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xmutsd.mvjfrqnk9xs5g1vn \
	I0429 05:04:26.044073    8430 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4832dc51ff6d0e6d2b485eb727ddc01b0033877744e5e13a6c0f8b67a1b7145 \
	I0429 05:04:26.044084    8430 kubeadm.go:309] 	--control-plane 
	I0429 05:04:26.044090    8430 kubeadm.go:309] 
	I0429 05:04:26.044138    8430 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 05:04:26.044144    8430 kubeadm.go:309] 
	I0429 05:04:26.044184    8430 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xmutsd.mvjfrqnk9xs5g1vn \
	I0429 05:04:26.044234    8430 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4832dc51ff6d0e6d2b485eb727ddc01b0033877744e5e13a6c0f8b67a1b7145 
	I0429 05:04:26.044510    8430 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
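	(kubeadm's final warning notes that the kubelet systemd unit is not enabled for boot; the remedy it suggests is the one-liner:

# As suggested by the kubeadm warning above:
sudo systemctl enable kubelet.service
)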
	I0429 05:04:26.044519    8430 cni.go:84] Creating CNI manager for ""
	I0429 05:04:26.044527    8430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:04:26.048502    8430 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 05:04:26.054431    8430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 05:04:26.057261    8430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
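	(The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A representative bridge CNI configuration of the kind this step installs — the subnet, plugin options, and names here are illustrative assumptions, not the actual file contents:

# Hypothetical bridge CNI conflist; the actual bytes written above are not logged.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
)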
	I0429 05:04:26.061740    8430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 05:04:26.061787    8430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 05:04:26.061811    8430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-383000 minikube.k8s.io/updated_at=2024_04_29T05_04_26_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844 minikube.k8s.io/name=stopped-upgrade-383000 minikube.k8s.io/primary=true
	I0429 05:04:26.104489    8430 kubeadm.go:1107] duration metric: took 42.736083ms to wait for elevateKubeSystemPrivileges
	I0429 05:04:26.104505    8430 ops.go:34] apiserver oom_adj: -16
	W0429 05:04:26.104603    8430 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 05:04:26.104608    8430 kubeadm.go:393] duration metric: took 4m11.633794s to StartCluster
	I0429 05:04:26.104617    8430 settings.go:142] acquiring lock: {Name:mka93054a23bdbf29aca25affe181be869710883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:04:26.104747    8430 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:04:26.105165    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/kubeconfig: {Name:mkc4105502c44b2331a2dd91226134a74ad93594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:04:26.105398    8430 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:04:26.108470    8430 out.go:177] * Verifying Kubernetes components...
	I0429 05:04:26.105405    8430 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 05:04:26.105479    8430 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:04:26.116501    8430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 05:04:26.116515    8430 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-383000"
	I0429 05:04:26.116520    8430 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-383000"
	I0429 05:04:26.116528    8430 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-383000"
	W0429 05:04:26.116531    8430 addons.go:243] addon storage-provisioner should already be in state true
	I0429 05:04:26.116531    8430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-383000"
	I0429 05:04:26.116542    8430 host.go:66] Checking if "stopped-upgrade-383000" exists ...
	I0429 05:04:26.117803    8430 kapi.go:59] client config for stopped-upgrade-383000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/stopped-upgrade-383000/client.key", CAFile:"/Users/jenkins/minikube-integration/18771-6092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10184fcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 05:04:26.117926    8430 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-383000"
	W0429 05:04:26.117931    8430 addons.go:243] addon default-storageclass should already be in state true
	I0429 05:04:26.117938    8430 host.go:66] Checking if "stopped-upgrade-383000" exists ...
	I0429 05:04:26.122445    8430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 05:04:26.126510    8430 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 05:04:26.126517    8430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 05:04:26.126523    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:04:26.127336    8430 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 05:04:26.127342    8430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 05:04:26.127346    8430 sshutil.go:53] new ssh client: &{IP:localhost Port:51351 SSHKeyPath:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/stopped-upgrade-383000/id_rsa Username:docker}
	I0429 05:04:26.194988    8430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 05:04:26.200012    8430 api_server.go:52] waiting for apiserver process to appear ...
	I0429 05:04:26.200054    8430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 05:04:26.204242    8430 api_server.go:72] duration metric: took 98.831083ms to wait for apiserver process to appear ...
	I0429 05:04:26.204249    8430 api_server.go:88] waiting for apiserver healthz status ...
	I0429 05:04:26.204256    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:26.212032    8430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 05:04:26.212462    8430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 05:04:31.206426    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:31.206471    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:36.206886    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:36.206928    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:41.207342    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:41.207374    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:46.207900    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:46.207941    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:51.208813    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:51.208859    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:04:56.209865    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:04:56.209909    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0429 05:04:56.572041    8430 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0429 05:04:56.575696    8430 out.go:177] * Enabled addons: storage-provisioner
	I0429 05:04:56.582587    8430 addons.go:505] duration metric: took 30.4772485s for enable addons: enabled=[storage-provisioner]
	I0429 05:05:01.211064    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:01.211109    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:06.212868    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:06.212914    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:11.214812    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:11.214844    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:16.215877    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:16.215901    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:21.218102    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:21.218138    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:26.219303    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:26.219450    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:05:26.236713    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:05:26.236788    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:05:26.251056    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:05:26.251122    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:05:26.261855    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:05:26.261920    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:05:26.272662    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:05:26.272728    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:05:26.283011    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:05:26.283077    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:05:26.293071    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:05:26.293139    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:05:26.302689    8430 logs.go:276] 0 containers: []
	W0429 05:05:26.302705    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:05:26.302763    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:05:26.313320    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:05:26.313338    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:05:26.313343    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:05:26.349410    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:05:26.349422    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:05:26.354008    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:05:26.354014    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:05:26.366164    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:05:26.366179    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:05:26.377715    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:05:26.377726    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:05:26.394948    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:05:26.394959    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:05:26.406220    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:05:26.406232    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:05:26.431378    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:05:26.431389    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:05:26.466122    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:05:26.466136    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:05:26.480586    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:05:26.480597    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:05:26.502219    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:05:26.502230    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:05:26.522000    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:05:26.522011    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:05:26.535043    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:05:26.535054    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:05:29.048745    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:34.049193    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:34.049474    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:05:34.069031    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:05:34.069120    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:05:34.083411    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:05:34.083485    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:05:34.095934    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:05:34.096004    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:05:34.106314    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:05:34.106384    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:05:34.117007    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:05:34.117087    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:05:34.126847    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:05:34.126917    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:05:34.136725    8430 logs.go:276] 0 containers: []
	W0429 05:05:34.136738    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:05:34.136795    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:05:34.146922    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:05:34.146937    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:05:34.146946    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:05:34.158141    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:05:34.158155    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:05:34.169441    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:05:34.169454    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:05:34.204044    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:05:34.204051    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:05:34.207968    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:05:34.207976    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:05:34.222206    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:05:34.222217    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:05:34.239272    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:05:34.239282    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:05:34.250757    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:05:34.250776    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:05:34.268244    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:05:34.268254    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:05:34.293101    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:05:34.293111    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:05:34.307524    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:05:34.307536    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:05:34.342981    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:05:34.342992    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:05:34.357749    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:05:34.357761    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:05:36.871513    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:41.872097    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:41.872286    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:05:41.889454    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:05:41.889536    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:05:41.905613    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:05:41.905688    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:05:41.916646    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:05:41.916709    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:05:41.927197    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:05:41.927261    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:05:41.937423    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:05:41.937478    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:05:41.947536    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:05:41.947590    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:05:41.962150    8430 logs.go:276] 0 containers: []
	W0429 05:05:41.962161    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:05:41.962222    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:05:41.972499    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:05:41.972518    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:05:41.972524    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:05:41.986134    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:05:41.986146    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:05:41.997512    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:05:41.997525    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:05:42.014267    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:05:42.014276    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:05:42.037854    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:05:42.037860    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:05:42.049415    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:05:42.049429    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:05:42.060680    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:05:42.060693    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:05:42.094666    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:05:42.094673    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:05:42.098964    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:05:42.098972    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:05:42.133359    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:05:42.133369    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:05:42.147392    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:05:42.147401    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:05:42.158544    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:05:42.158555    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:05:42.173492    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:05:42.173503    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:05:44.687218    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:49.690014    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:49.690268    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:05:49.712697    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:05:49.712810    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:05:49.728671    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:05:49.728755    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:05:49.742328    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:05:49.742393    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:05:49.752970    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:05:49.753028    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:05:49.763296    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:05:49.763363    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:05:49.774013    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:05:49.774082    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:05:49.783725    8430 logs.go:276] 0 containers: []
	W0429 05:05:49.783736    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:05:49.783788    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:05:49.793717    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:05:49.793731    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:05:49.793737    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:05:49.807927    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:05:49.807937    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:05:49.819184    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:05:49.819196    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:05:49.833794    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:05:49.833806    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:05:49.845238    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:05:49.845250    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:05:49.870311    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:05:49.870320    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:05:49.905530    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:05:49.905541    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:05:49.927637    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:05:49.927649    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:05:49.939659    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:05:49.939673    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:05:49.951133    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:05:49.951145    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:05:49.972238    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:05:49.972251    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:05:49.983522    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:05:49.983535    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:05:50.016935    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:05:50.016942    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
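
Every failed probe triggers the same diagnostics pass, run over SSH inside the guest. The commands are identical from cycle to cycle (only their order varies), so they are collected here once as a sketch; every command and path below is taken verbatim from the log lines above, and the k8s_ name prefix is the container-naming convention that cri-dockerd inherits from dockershim:

    # One diagnostics pass: enumerate control-plane containers by name,
    # dump the last 400 log lines of each, then the unit journals, dmesg,
    # the node description, and overall container status.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
        for id in $(docker ps -a --filter=name=k8s_${name} --format={{.ID}}); do
            docker logs --tail 400 "$id"
        done
    done
    sudo journalctl -u docker -u cri-docker -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
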
	I0429 05:05:52.523239    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:05:57.524686    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:05:57.525020    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:05:57.557243    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:05:57.557370    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:05:57.576621    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:05:57.576710    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:05:57.589896    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:05:57.589969    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:05:57.601593    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:05:57.601662    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:05:57.612653    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:05:57.612719    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:05:57.623544    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:05:57.623601    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:05:57.633684    8430 logs.go:276] 0 containers: []
	W0429 05:05:57.633695    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:05:57.633744    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:05:57.644146    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:05:57.644160    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:05:57.644165    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:05:57.658497    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:05:57.658509    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:05:57.682765    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:05:57.682780    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:05:57.719253    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:05:57.719268    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:05:57.734929    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:05:57.734945    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:05:57.749320    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:05:57.749330    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:05:57.760703    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:05:57.760710    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:05:57.776647    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:05:57.776657    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:05:57.792633    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:05:57.792642    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:05:57.804141    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:05:57.804151    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:05:57.821385    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:05:57.821398    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:05:57.826189    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:05:57.826198    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:05:57.860390    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:05:57.860399    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:06:00.374336    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:06:05.376846    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:06:05.377267    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:06:05.415771    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:06:05.415906    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:06:05.438280    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:06:05.438390    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:06:05.453379    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:06:05.453444    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:06:05.466006    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:06:05.466077    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:06:05.476605    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:06:05.476675    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:06:05.487148    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:06:05.487206    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:06:05.497004    8430 logs.go:276] 0 containers: []
	W0429 05:06:05.497013    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:06:05.497060    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:06:05.506808    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:06:05.506824    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:06:05.506830    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:06:05.520881    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:06:05.520894    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:06:05.558786    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:06:05.558799    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:06:05.575071    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:06:05.575084    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:06:05.593599    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:06:05.593610    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:06:05.604793    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:06:05.604805    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:06:05.629705    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:06:05.629712    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:06:05.648964    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:06:05.648976    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:06:05.661415    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:06:05.661426    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:06:05.696216    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:06:05.696223    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:06:05.700062    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:06:05.700069    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:06:05.713571    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:06:05.713582    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:06:05.725149    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:06:05.725160    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:06:08.238781    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:06:13.241459    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:06:13.241875    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:06:13.281422    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:06:13.281560    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:06:13.304801    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:06:13.304916    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:06:13.319740    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:06:13.319818    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:06:13.332228    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:06:13.332291    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:06:13.342448    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:06:13.342514    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:06:13.353300    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:06:13.353367    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:06:13.362864    8430 logs.go:276] 0 containers: []
	W0429 05:06:13.362877    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:06:13.362928    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:06:13.373243    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:06:13.373257    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:06:13.373262    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:06:13.390087    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:06:13.390101    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:06:13.413598    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:06:13.413605    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:06:13.425325    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:06:13.425337    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:06:13.459721    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:06:13.459730    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:06:13.463729    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:06:13.463735    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:06:13.497662    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:06:13.497673    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:06:13.512889    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:06:13.512902    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:06:13.524179    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:06:13.524187    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:06:13.535396    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:06:13.535408    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:06:13.549213    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:06:13.549224    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:06:13.562626    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:06:13.562637    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:06:13.574086    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:06:13.574097    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:06:16.093882    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:06:21.096666    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:06:21.097099    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:06:21.145066    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:06:21.145194    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:06:21.163439    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:06:21.163532    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:06:21.177590    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:06:21.177661    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:06:21.189721    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:06:21.189785    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:06:21.200032    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:06:21.200099    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:06:21.211457    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:06:21.211519    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:06:21.221908    8430 logs.go:276] 0 containers: []
	W0429 05:06:21.221916    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:06:21.221962    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:06:21.240174    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:06:21.240191    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:06:21.240195    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:06:21.265069    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:06:21.265077    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:06:21.299604    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:06:21.299611    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:06:21.333630    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:06:21.333642    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:06:21.348255    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:06:21.348270    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:06:21.362130    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:06:21.362144    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:06:21.373106    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:06:21.373119    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:06:21.384382    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:06:21.384394    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:06:21.396006    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:06:21.396018    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:06:21.400214    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:06:21.400220    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:06:21.418947    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:06:21.418958    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:06:21.436472    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:06:21.436482    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:06:21.447563    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:06:21.447577    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:06:23.961485    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:06:28.963957    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:06:28.964156    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:06:28.976549    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:06:28.976625    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:06:28.986781    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:06:28.986843    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:06:28.996814    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:06:28.996873    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:06:29.007369    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:06:29.007426    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:06:29.022031    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:06:29.022106    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:06:29.032555    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:06:29.032610    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:06:29.042323    8430 logs.go:276] 0 containers: []
	W0429 05:06:29.042340    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:06:29.042384    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:06:29.052616    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:06:29.052632    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:06:29.052639    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:06:29.056820    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:06:29.056828    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:06:29.071360    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:06:29.071372    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:06:29.086259    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:06:29.086270    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:06:29.110316    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:06:29.110324    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:06:29.143194    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:06:29.143204    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:06:29.177223    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:06:29.177236    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:06:29.191396    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:06:29.191405    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:06:29.202713    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:06:29.202724    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:06:29.218186    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:06:29.218200    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:06:29.229450    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:06:29.229461    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:06:29.246097    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:06:29.246107    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:06:29.258331    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:06:29.258344    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:06:31.772339    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:06:36.775014    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:06:36.775347    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:06:36.813404    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:06:36.813532    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:06:36.835414    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:06:36.835530    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:06:36.851090    8430 logs.go:276] 2 containers: [4cdbcb53ff73 e1208a593cd5]
	I0429 05:06:36.851159    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:06:36.865846    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:06:36.865918    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:06:36.876453    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:06:36.876531    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:06:36.886939    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:06:36.887005    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:06:36.897526    8430 logs.go:276] 0 containers: []
	W0429 05:06:36.897537    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:06:36.897593    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:06:36.908764    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:06:36.908779    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:06:36.908784    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:06:36.945828    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:06:36.945840    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:06:36.981601    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:06:36.981613    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:06:36.993500    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:06:36.993513    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:06:37.008112    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:06:37.008123    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:06:37.019172    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:06:37.019186    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:06:37.042859    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:06:37.042867    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:06:37.046896    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:06:37.046903    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:06:37.061117    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:06:37.061129    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:06:37.075817    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:06:37.075829    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:06:37.087177    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:06:37.087190    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:06:37.098966    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:06:37.098977    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:06:37.116065    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:06:37.116076    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:06:39.629716    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:06:44.632329    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:06:44.632523    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:06:44.651230    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:06:44.651306    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:06:44.664110    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:06:44.664166    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:06:44.675384    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:06:44.675461    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:06:44.685565    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:06:44.685626    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:06:44.696931    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:06:44.696989    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:06:44.707215    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:06:44.707278    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:06:44.717132    8430 logs.go:276] 0 containers: []
	W0429 05:06:44.717142    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:06:44.717194    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:06:44.727573    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:06:44.727593    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:06:44.727598    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:06:44.739368    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:06:44.739386    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:06:44.752012    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:06:44.752024    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:06:44.769865    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:06:44.769875    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:06:44.793969    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:06:44.793978    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:06:44.798216    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:06:44.798224    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:06:44.812224    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:06:44.812237    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:06:44.823558    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:06:44.823568    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:06:44.834968    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:06:44.834981    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:06:44.846408    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:06:44.846418    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:06:44.862714    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:06:44.862725    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:06:44.875083    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:06:44.875096    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:06:44.886438    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:06:44.886452    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:06:44.920836    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:06:44.920846    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:06:44.955041    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:06:44.955053    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
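
From the cycle above onward, the coredns name filter matches 4 containers instead of 2: 2d1edc720274 and 1e82e9826537 appear alongside the original 4cdbcb53ff73 and e1208a593cd5. Because docker ps -a also lists exited containers, the most likely reading is that the coredns pods were restarted while the apiserver remained unreachable, leaving the exited instances visible next to their replacements. A hypothetical follow-up command (not run in this test) that would confirm this by showing each container's state:

    docker ps -a --filter=name=k8s_coredns --format '{{.ID}} {{.Status}}'
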
	I0429 05:06:47.472068    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:06:52.474958    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:06:52.475403    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:06:52.517004    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:06:52.517134    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:06:52.543065    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:06:52.543178    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:06:52.557136    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:06:52.557206    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:06:52.568602    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:06:52.568667    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:06:52.579853    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:06:52.579920    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:06:52.590434    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:06:52.590497    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:06:52.600417    8430 logs.go:276] 0 containers: []
	W0429 05:06:52.600431    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:06:52.600485    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:06:52.611029    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:06:52.611043    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:06:52.611048    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:06:52.644555    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:06:52.644567    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:06:52.657715    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:06:52.657729    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:06:52.671245    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:06:52.671259    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:06:52.682787    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:06:52.682798    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:06:52.697951    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:06:52.697966    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:06:52.714712    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:06:52.714726    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:06:52.726330    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:06:52.726344    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:06:52.741515    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:06:52.741527    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:06:52.753104    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:06:52.753113    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:06:52.777457    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:06:52.777464    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:06:52.790965    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:06:52.790979    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:06:52.802990    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:06:52.803003    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:06:52.838132    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:06:52.838141    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:06:52.842457    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:06:52.842466    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:06:55.358218    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:07:00.358621    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:07:00.359055    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:07:00.399438    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:07:00.399570    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:07:00.423101    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:07:00.423198    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:07:00.438331    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:07:00.438394    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:07:00.450733    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:07:00.450801    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:07:00.469813    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:07:00.469875    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:07:00.480302    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:07:00.480365    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:07:00.490880    8430 logs.go:276] 0 containers: []
	W0429 05:07:00.490896    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:07:00.490941    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:07:00.501945    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:07:00.501966    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:07:00.501972    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:07:00.513628    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:07:00.513639    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:07:00.538748    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:07:00.538758    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:07:00.557366    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:07:00.557379    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:07:00.591807    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:07:00.591816    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:07:00.605941    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:07:00.605954    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:07:00.618060    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:07:00.618071    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:07:00.636310    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:07:00.636323    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:07:00.640411    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:07:00.640419    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:07:00.653015    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:07:00.653029    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:07:00.668704    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:07:00.668715    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:07:00.702138    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:07:00.702150    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:07:00.716588    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:07:00.716599    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:07:00.728692    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:07:00.728705    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:07:00.740804    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:07:00.740817    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:07:03.256833    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:07:08.258959    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:07:08.259082    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:07:08.271498    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:07:08.271569    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:07:08.281647    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:07:08.281713    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:07:08.291662    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:07:08.291726    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:07:08.301910    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:07:08.301975    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:07:08.311834    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:07:08.311893    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:07:08.322222    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:07:08.322278    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:07:08.332378    8430 logs.go:276] 0 containers: []
	W0429 05:07:08.332391    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:07:08.332444    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:07:08.342893    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:07:08.342908    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:07:08.342913    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:07:08.347330    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:07:08.347339    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:07:08.361099    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:07:08.361111    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:07:08.394234    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:07:08.394248    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:07:08.407869    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:07:08.407883    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:07:08.425406    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:07:08.425417    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:07:08.446699    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:07:08.446710    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:07:08.458352    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:07:08.458366    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:07:08.491805    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:07:08.491812    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:07:08.504474    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:07:08.504484    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:07:08.519458    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:07:08.519469    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:07:08.533727    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:07:08.533738    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:07:08.548214    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:07:08.548223    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:07:08.560911    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:07:08.560924    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:07:08.585017    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:07:08.585023    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:07:11.098458    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:07:16.101368    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:07:16.101728    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:07:16.134161    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:07:16.134285    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:07:16.154125    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:07:16.154230    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:07:16.168528    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:07:16.168604    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:07:16.180821    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:07:16.180887    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:07:16.191776    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:07:16.191838    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:07:16.202598    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:07:16.202661    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:07:16.213284    8430 logs.go:276] 0 containers: []
	W0429 05:07:16.213296    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:07:16.213345    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:07:16.225374    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:07:16.225394    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:07:16.225399    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:07:16.260417    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:07:16.260434    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:07:16.272402    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:07:16.272412    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:07:16.286052    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:07:16.286063    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:07:16.297812    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:07:16.297825    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:07:16.309001    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:07:16.309014    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:07:16.332418    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:07:16.332427    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:07:16.349836    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:07:16.349849    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:07:16.379801    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:07:16.379814    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:07:16.391810    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:07:16.391822    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:07:16.403865    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:07:16.403877    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:07:16.438231    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:07:16.438246    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:07:16.452552    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:07:16.452564    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:07:16.456654    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:07:16.456663    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:07:16.471838    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:07:16.471847    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:07:18.991421    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:07:23.993711    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:07:23.994114    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:07:24.032612    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:07:24.032742    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:07:24.054599    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:07:24.054714    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:07:24.070135    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:07:24.070202    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:07:24.082344    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:07:24.082411    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:07:24.093462    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:07:24.093533    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:07:24.103596    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:07:24.103656    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:07:24.113990    8430 logs.go:276] 0 containers: []
	W0429 05:07:24.114001    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:07:24.114063    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:07:24.123943    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:07:24.123960    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:07:24.123965    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:07:24.138017    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:07:24.138031    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:07:24.149514    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:07:24.149527    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:07:24.161283    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:07:24.161296    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:07:24.165373    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:07:24.165379    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:07:24.180055    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:07:24.180066    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:07:24.203692    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:07:24.203703    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:07:24.215315    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:07:24.215326    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:07:24.239855    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:07:24.239864    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:07:24.276634    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:07:24.276644    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:07:24.289604    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:07:24.289617    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:07:24.324338    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:07:24.324347    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:07:24.336327    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:07:24.336338    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:07:24.347925    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:07:24.347938    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:07:24.359426    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:07:24.359441    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:07:26.884289    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:07:31.884685    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:07:31.884757    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:07:31.897138    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:07:31.897225    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:07:31.911941    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:07:31.911993    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:07:31.922372    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:07:31.922441    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:07:31.934727    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:07:31.934797    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:07:31.953628    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:07:31.953689    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:07:31.964486    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:07:31.964543    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:07:31.975615    8430 logs.go:276] 0 containers: []
	W0429 05:07:31.975629    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:07:31.975676    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:07:31.993583    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:07:31.993601    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:07:31.993606    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:07:32.008763    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:07:32.008775    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:07:32.021959    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:07:32.021969    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:07:32.050056    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:07:32.050077    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:07:32.063115    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:07:32.063128    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:07:32.099699    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:07:32.099709    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:07:32.136948    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:07:32.136965    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:07:32.151851    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:07:32.151862    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:07:32.164745    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:07:32.164757    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:07:32.169631    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:07:32.169641    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:07:32.185741    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:07:32.185750    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:07:32.200920    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:07:32.200930    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:07:32.215596    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:07:32.215605    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:07:32.230807    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:07:32.230815    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:07:32.248479    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:07:32.248490    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:07:34.762954    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:07:39.765139    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:07:39.765257    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:07:39.778126    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:07:39.778210    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:07:39.790745    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:07:39.790815    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:07:39.803630    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:07:39.803706    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:07:39.819291    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:07:39.819351    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:07:39.833176    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:07:39.833245    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:07:39.845429    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:07:39.845494    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:07:39.859235    8430 logs.go:276] 0 containers: []
	W0429 05:07:39.859247    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:07:39.859303    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:07:39.871732    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:07:39.871753    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:07:39.871758    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:07:39.887884    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:07:39.887898    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:07:39.901543    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:07:39.901560    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:07:39.914494    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:07:39.914506    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:07:39.926733    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:07:39.926746    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:07:39.939045    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:07:39.939056    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:07:39.954162    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:07:39.954174    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:07:39.968917    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:07:39.968925    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:07:39.985868    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:07:39.985878    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:07:40.003963    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:07:40.003972    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:07:40.028656    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:07:40.028665    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:07:40.063285    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:07:40.063293    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:07:40.067778    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:07:40.067787    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:07:40.079340    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:07:40.079351    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:07:40.114424    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:07:40.114434    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:07:42.627909    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:07:47.630040    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:07:47.630403    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:07:47.661932    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:07:47.662048    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:07:47.681222    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:07:47.681312    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:07:47.695783    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:07:47.695854    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:07:47.707355    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:07:47.707412    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:07:47.717964    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:07:47.718023    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:07:47.728266    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:07:47.728336    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:07:47.738325    8430 logs.go:276] 0 containers: []
	W0429 05:07:47.738336    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:07:47.738389    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:07:47.751996    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:07:47.752011    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:07:47.752016    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:07:47.786649    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:07:47.786660    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:07:47.800733    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:07:47.800746    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:07:47.812037    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:07:47.812052    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:07:47.845537    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:07:47.845546    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:07:47.862196    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:07:47.862208    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:07:47.874841    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:07:47.874856    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:07:47.889372    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:07:47.889381    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:07:47.906475    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:07:47.906487    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:07:47.917910    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:07:47.917923    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:07:47.930008    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:07:47.930023    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:07:47.933959    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:07:47.933968    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:07:47.945197    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:07:47.945208    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:07:47.956807    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:07:47.956818    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:07:47.968740    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:07:47.968753    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:07:50.495068    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:07:55.496998    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:07:55.497098    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:07:55.508846    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:07:55.508894    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:07:55.521021    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:07:55.521085    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:07:55.532940    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:07:55.533008    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:07:55.545831    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:07:55.545887    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:07:55.556614    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:07:55.556669    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:07:55.567227    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:07:55.567291    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:07:55.578788    8430 logs.go:276] 0 containers: []
	W0429 05:07:55.578799    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:07:55.578841    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:07:55.590863    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:07:55.590882    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:07:55.590889    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:07:55.635998    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:07:55.636010    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:07:55.652147    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:07:55.652160    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:07:55.656758    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:07:55.656771    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:07:55.672820    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:07:55.672835    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:07:55.684960    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:07:55.684969    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:07:55.698555    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:07:55.698567    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:07:55.714803    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:07:55.714814    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:07:55.753272    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:07:55.753285    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:07:55.770000    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:07:55.770011    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:07:55.782024    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:07:55.782033    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:07:55.795709    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:07:55.795719    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:07:55.814555    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:07:55.814565    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:07:55.839100    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:07:55.839120    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:07:55.862830    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:07:55.862844    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:07:58.375108    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:08:03.377982    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:08:03.378219    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:08:03.411584    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:08:03.411759    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:08:03.441696    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:08:03.441801    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:08:03.456683    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:08:03.456766    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:08:03.468653    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:08:03.468712    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:08:03.479180    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:08:03.479248    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:08:03.489484    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:08:03.489558    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:08:03.500042    8430 logs.go:276] 0 containers: []
	W0429 05:08:03.500052    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:08:03.500101    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:08:03.509794    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:08:03.509812    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:08:03.509828    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:08:03.543243    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:08:03.543257    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:08:03.559707    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:08:03.559720    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:08:03.571271    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:08:03.571285    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:08:03.594778    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:08:03.594787    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:08:03.606592    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:08:03.606603    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:08:03.639833    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:08:03.639841    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:08:03.651183    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:08:03.651197    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:08:03.662258    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:08:03.662270    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:08:03.676661    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:08:03.676676    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:08:03.693954    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:08:03.693963    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:08:03.698293    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:08:03.698302    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:08:03.720385    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:08:03.720397    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:08:03.742728    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:08:03.742741    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:08:03.753969    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:08:03.753981    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:08:06.265791    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:08:11.266907    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:08:11.267266    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:08:11.302316    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:08:11.302455    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:08:11.322956    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:08:11.323058    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:08:11.338260    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:08:11.338329    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:08:11.350856    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:08:11.350930    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:08:11.362188    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:08:11.362256    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:08:11.372644    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:08:11.372711    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:08:11.385792    8430 logs.go:276] 0 containers: []
	W0429 05:08:11.385802    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:08:11.385854    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:08:11.396645    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:08:11.396664    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:08:11.396669    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:08:11.431111    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:08:11.431126    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:08:11.445660    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:08:11.445674    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:08:11.457290    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:08:11.457301    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:08:11.470901    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:08:11.470915    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:08:11.485592    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:08:11.485603    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:08:11.497059    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:08:11.497069    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:08:11.531181    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:08:11.531190    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:08:11.548855    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:08:11.548868    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:08:11.560606    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:08:11.560618    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:08:11.564981    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:08:11.564987    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:08:11.579324    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:08:11.579334    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:08:11.591249    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:08:11.591260    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:08:11.603464    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:08:11.603478    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:08:11.615169    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:08:11.615183    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:08:14.142629    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:08:19.143334    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:08:19.143418    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0429 05:08:19.155205    8430 logs.go:276] 1 containers: [9886bd2055a0]
	I0429 05:08:19.155272    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0429 05:08:19.170329    8430 logs.go:276] 1 containers: [1192532268ff]
	I0429 05:08:19.170397    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0429 05:08:19.181819    8430 logs.go:276] 4 containers: [2d1edc720274 1e82e9826537 4cdbcb53ff73 e1208a593cd5]
	I0429 05:08:19.181894    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0429 05:08:19.198095    8430 logs.go:276] 1 containers: [653138b66261]
	I0429 05:08:19.198151    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0429 05:08:19.209199    8430 logs.go:276] 1 containers: [87129d8c2826]
	I0429 05:08:19.209271    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0429 05:08:19.220455    8430 logs.go:276] 1 containers: [be491a1f08d0]
	I0429 05:08:19.220519    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0429 05:08:19.231757    8430 logs.go:276] 0 containers: []
	W0429 05:08:19.231767    8430 logs.go:278] No container was found matching "kindnet"
	I0429 05:08:19.231818    8430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0429 05:08:19.243436    8430 logs.go:276] 1 containers: [bb3c72f3e004]
	I0429 05:08:19.243456    8430 logs.go:123] Gathering logs for dmesg ...
	I0429 05:08:19.243463    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 05:08:19.248883    8430 logs.go:123] Gathering logs for describe nodes ...
	I0429 05:08:19.248894    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 05:08:19.285595    8430 logs.go:123] Gathering logs for coredns [e1208a593cd5] ...
	I0429 05:08:19.285606    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1208a593cd5"
	I0429 05:08:19.298921    8430 logs.go:123] Gathering logs for kube-proxy [87129d8c2826] ...
	I0429 05:08:19.298937    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87129d8c2826"
	I0429 05:08:19.312668    8430 logs.go:123] Gathering logs for container status ...
	I0429 05:08:19.312679    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 05:08:19.327078    8430 logs.go:123] Gathering logs for kube-apiserver [9886bd2055a0] ...
	I0429 05:08:19.327091    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9886bd2055a0"
	I0429 05:08:19.347433    8430 logs.go:123] Gathering logs for coredns [2d1edc720274] ...
	I0429 05:08:19.347449    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1edc720274"
	I0429 05:08:19.360458    8430 logs.go:123] Gathering logs for etcd [1192532268ff] ...
	I0429 05:08:19.360466    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1192532268ff"
	I0429 05:08:19.375021    8430 logs.go:123] Gathering logs for coredns [1e82e9826537] ...
	I0429 05:08:19.375036    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e82e9826537"
	I0429 05:08:19.387865    8430 logs.go:123] Gathering logs for coredns [4cdbcb53ff73] ...
	I0429 05:08:19.387876    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cdbcb53ff73"
	I0429 05:08:19.402271    8430 logs.go:123] Gathering logs for kube-scheduler [653138b66261] ...
	I0429 05:08:19.402281    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653138b66261"
	I0429 05:08:19.417399    8430 logs.go:123] Gathering logs for kube-controller-manager [be491a1f08d0] ...
	I0429 05:08:19.417408    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be491a1f08d0"
	I0429 05:08:19.436564    8430 logs.go:123] Gathering logs for storage-provisioner [bb3c72f3e004] ...
	I0429 05:08:19.436577    8430 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb3c72f3e004"
	I0429 05:08:19.449572    8430 logs.go:123] Gathering logs for Docker ...
	I0429 05:08:19.449582    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0429 05:08:19.472982    8430 logs.go:123] Gathering logs for kubelet ...
	I0429 05:08:19.473000    8430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 05:08:22.010583    8430 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0429 05:08:27.013396    8430 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 05:08:27.020615    8430 out.go:177] 
	W0429 05:08:27.024581    8430 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0429 05:08:27.024604    8430 out.go:239] * 
	W0429 05:08:27.026468    8430 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:08:27.035553    8430 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-383000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (580.42s)
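The failure mode above is an apiserver that never reports healthy: the captured log shows the same cycle repeating for the full 6m0s wait — poll https://10.0.2.15:8443/healthz with a 5s client timeout, hit "context deadline exceeded", then gather kubelet/dmesg/container logs before the next attempt. A minimal sketch for running the same probe by hand, assuming the stopped-upgrade-383000 profile still exists and its VM is reachable over SSH (neither is guaranteed after the failed run):

	# Sketch only: issue the healthz probe minikube polls, from inside the guest.
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-383000 -- \
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# A healthy apiserver answers "ok"; here the request should time out,
	# matching the "context deadline exceeded" lines above.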

                                                
                                    
TestPause/serial/Start (10.17s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-938000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-938000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.125615042s)

                                                
                                                
-- stdout --
	* [pause-938000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-938000" primary control-plane node in "pause-938000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-938000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-938000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-938000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-938000 -n pause-938000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-938000 -n pause-938000: exit status 7 (45.360292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-938000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.17s)
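Unlike the upgrade test above, this failure (and the NoKubernetes series below) never reaches a running VM: both qemu2 create attempts die with Connection refused on /var/run/socket_vmnet, and the post-mortem status exit code 7 is the harness's stopped-host case (state="Stopped"). The likely root cause is the socket_vmnet daemon not running on the CI host. A quick host-side check, sketched under the assumption of a Homebrew-managed socket_vmnet install (this report does not say how it was installed):

	# Sketch only; paths and service names assume a Homebrew socket_vmnet install.
	ls -l /var/run/socket_vmnet              # the socket should exist while the daemon runs
	sudo launchctl list | grep -i vmnet      # is a socket_vmnet service loaded at all?
	sudo brew services restart socket_vmnet  # restart it if loaded but not answering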

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-358000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-358000 --driver=qemu2 : exit status 80 (9.716415708s)

                                                
                                                
-- stdout --
	* [NoKubernetes-358000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-358000" primary control-plane node in "NoKubernetes-358000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-358000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-358000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-358000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-358000 -n NoKubernetes-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-358000 -n NoKubernetes-358000: exit status 7 (49.188667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.77s)
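Note that minikube retries host creation once on its own ("StartHost failed, but will try again") before exiting with GUEST_PROVISION, so the two Connection refused blocks in the stdout above are one test attempt, not two. The cleanup step is the one the stderr itself suggests; it is harmless if the profile is already gone:

	out/minikube-darwin-arm64 delete -p NoKubernetes-358000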

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --driver=qemu2 : exit status 80 (5.233406292s)

                                                
                                                
-- stdout --
	* [NoKubernetes-358000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-358000
	* Restarting existing qemu2 VM for "NoKubernetes-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-358000 -n NoKubernetes-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-358000 -n NoKubernetes-358000: exit status 7 (34.504417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.27s)

                                                
                                    
TestNoKubernetes/serial/Start (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --driver=qemu2 : exit status 80 (5.251473125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-358000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-358000
	* Restarting existing qemu2 VM for "NoKubernetes-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-358000 -n NoKubernetes-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-358000 -n NoKubernetes-358000: exit status 7 (62.396125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-358000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-358000 --driver=qemu2 : exit status 80 (5.258278s)

-- stdout --
	* [NoKubernetes-358000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-358000
	* Restarting existing qemu2 VM for "NoKubernetes-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-358000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-358000 -n NoKubernetes-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-358000 -n NoKubernetes-358000: exit status 7 (68.154125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

TestNetworkPlugins/group/kindnet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.822547875s)

-- stdout --
	* [kindnet-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-413000" primary control-plane node in "kindnet-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:06:43.423083    8659 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:06:43.423261    8659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:06:43.423264    8659 out.go:304] Setting ErrFile to fd 2...
	I0429 05:06:43.423266    8659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:06:43.423406    8659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:06:43.424501    8659 out.go:298] Setting JSON to false
	I0429 05:06:43.440776    8659 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5774,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:06:43.440853    8659 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:06:43.445909    8659 out.go:177] * [kindnet-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:06:43.453854    8659 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:06:43.457869    8659 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:06:43.453901    8659 notify.go:220] Checking for updates...
	I0429 05:06:43.463820    8659 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:06:43.466870    8659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:06:43.469837    8659 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:06:43.472837    8659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:06:43.476214    8659 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:06:43.476292    8659 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:06:43.476334    8659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:06:43.480823    8659 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:06:43.487807    8659 start.go:297] selected driver: qemu2
	I0429 05:06:43.487814    8659 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:06:43.487820    8659 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:06:43.490293    8659 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:06:43.492870    8659 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:06:43.495922    8659 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:06:43.495967    8659 cni.go:84] Creating CNI manager for "kindnet"
	I0429 05:06:43.495972    8659 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 05:06:43.496000    8659 start.go:340] cluster config:
	{Name:kindnet-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:06:43.500604    8659 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:06:43.508890    8659 out.go:177] * Starting "kindnet-413000" primary control-plane node in "kindnet-413000" cluster
	I0429 05:06:43.512853    8659 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:06:43.512871    8659 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:06:43.512879    8659 cache.go:56] Caching tarball of preloaded images
	I0429 05:06:43.512950    8659 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:06:43.512955    8659 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:06:43.513036    8659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/kindnet-413000/config.json ...
	I0429 05:06:43.513048    8659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/kindnet-413000/config.json: {Name:mkeabdf663b07f50d12ab100b456aeecbf3febce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:06:43.513477    8659 start.go:360] acquireMachinesLock for kindnet-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:06:43.513510    8659 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "kindnet-413000"
	I0429 05:06:43.513521    8659 start.go:93] Provisioning new machine with config: &{Name:kindnet-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:06:43.513548    8659 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:06:43.521844    8659 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:06:43.537333    8659 start.go:159] libmachine.API.Create for "kindnet-413000" (driver="qemu2")
	I0429 05:06:43.537360    8659 client.go:168] LocalClient.Create starting
	I0429 05:06:43.537423    8659 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:06:43.537458    8659 main.go:141] libmachine: Decoding PEM data...
	I0429 05:06:43.537469    8659 main.go:141] libmachine: Parsing certificate...
	I0429 05:06:43.537508    8659 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:06:43.537530    8659 main.go:141] libmachine: Decoding PEM data...
	I0429 05:06:43.537538    8659 main.go:141] libmachine: Parsing certificate...
	I0429 05:06:43.538022    8659 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:06:43.686076    8659 main.go:141] libmachine: Creating SSH key...
	I0429 05:06:43.774798    8659 main.go:141] libmachine: Creating Disk image...
	I0429 05:06:43.774805    8659 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:06:43.775007    8659 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2
	I0429 05:06:43.787469    8659 main.go:141] libmachine: STDOUT: 
	I0429 05:06:43.787490    8659 main.go:141] libmachine: STDERR: 
	I0429 05:06:43.787553    8659 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2 +20000M
	I0429 05:06:43.799008    8659 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:06:43.799033    8659 main.go:141] libmachine: STDERR: 
	I0429 05:06:43.799053    8659 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2
	I0429 05:06:43.799065    8659 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:06:43.799097    8659 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a0:31:b4:44:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2
	I0429 05:06:43.800872    8659 main.go:141] libmachine: STDOUT: 
	I0429 05:06:43.800888    8659 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:06:43.800909    8659 client.go:171] duration metric: took 263.544667ms to LocalClient.Create
	I0429 05:06:45.803131    8659 start.go:128] duration metric: took 2.28955425s to createHost
	I0429 05:06:45.803218    8659 start.go:83] releasing machines lock for "kindnet-413000", held for 2.289702959s
	W0429 05:06:45.803365    8659 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:06:45.810721    8659 out.go:177] * Deleting "kindnet-413000" in qemu2 ...
	W0429 05:06:45.841296    8659 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:06:45.841345    8659 start.go:728] Will try again in 5 seconds ...
	I0429 05:06:50.843628    8659 start.go:360] acquireMachinesLock for kindnet-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:06:50.844280    8659 start.go:364] duration metric: took 515.375µs to acquireMachinesLock for "kindnet-413000"
	I0429 05:06:50.844421    8659 start.go:93] Provisioning new machine with config: &{Name:kindnet-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:06:50.844822    8659 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:06:50.849673    8659 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:06:50.895347    8659 start.go:159] libmachine.API.Create for "kindnet-413000" (driver="qemu2")
	I0429 05:06:50.895392    8659 client.go:168] LocalClient.Create starting
	I0429 05:06:50.895507    8659 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:06:50.895577    8659 main.go:141] libmachine: Decoding PEM data...
	I0429 05:06:50.895592    8659 main.go:141] libmachine: Parsing certificate...
	I0429 05:06:50.895650    8659 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:06:50.895687    8659 main.go:141] libmachine: Decoding PEM data...
	I0429 05:06:50.895701    8659 main.go:141] libmachine: Parsing certificate...
	I0429 05:06:50.896204    8659 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:06:51.051555    8659 main.go:141] libmachine: Creating SSH key...
	I0429 05:06:51.149833    8659 main.go:141] libmachine: Creating Disk image...
	I0429 05:06:51.149842    8659 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:06:51.150034    8659 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2
	I0429 05:06:51.162748    8659 main.go:141] libmachine: STDOUT: 
	I0429 05:06:51.162773    8659 main.go:141] libmachine: STDERR: 
	I0429 05:06:51.162835    8659 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2 +20000M
	I0429 05:06:51.173665    8659 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:06:51.173696    8659 main.go:141] libmachine: STDERR: 
	I0429 05:06:51.173708    8659 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2
	I0429 05:06:51.173723    8659 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:06:51.173761    8659 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:b4:fe:08:36:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kindnet-413000/disk.qcow2
	I0429 05:06:51.175493    8659 main.go:141] libmachine: STDOUT: 
	I0429 05:06:51.175519    8659 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:06:51.175532    8659 client.go:171] duration metric: took 280.136375ms to LocalClient.Create
	I0429 05:06:53.177651    8659 start.go:128] duration metric: took 2.332813541s to createHost
	I0429 05:06:53.177693    8659 start.go:83] releasing machines lock for "kindnet-413000", held for 2.333398708s
	W0429 05:06:53.177876    8659 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:06:53.191074    8659 out.go:177] 
	W0429 05:06:53.194150    8659 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:06:53.194179    8659 out.go:239] * 
	* 
	W0429 05:06:53.195099    8659 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:06:53.207083    8659 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.82s)
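Note: the verbose trace above shows the exact failing step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client with the socket path first and the qemu-system-aarch64 command line after it, and qemu inherits the vmnet connection as a file descriptor (-netdev socket,id=net0,fd=3). The "Connection refused" is therefore raised by the client before qemu ever starts. A minimal sketch that exercises just that handshake, substituting a harmless echo for qemu (the echo stand-in is hypothetical, not from this report):

	# If the daemon is listening this prints the marker; if not, it should
	# fail with the same "Failed to connect" error seen throughout this report.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /bin/echo vmnet-handshake-ok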

TestNetworkPlugins/group/auto/Start (9.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.859004125s)

-- stdout --
	* [auto-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-413000" primary control-plane node in "auto-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:06:55.699390    8773 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:06:55.699522    8773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:06:55.699525    8773 out.go:304] Setting ErrFile to fd 2...
	I0429 05:06:55.699527    8773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:06:55.699649    8773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:06:55.700757    8773 out.go:298] Setting JSON to false
	I0429 05:06:55.717014    8773 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5786,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:06:55.717083    8773 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:06:55.722575    8773 out.go:177] * [auto-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:06:55.730589    8773 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:06:55.734475    8773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:06:55.730655    8773 notify.go:220] Checking for updates...
	I0429 05:06:55.740537    8773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:06:55.743587    8773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:06:55.746486    8773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:06:55.749541    8773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:06:55.752794    8773 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:06:55.752865    8773 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:06:55.752915    8773 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:06:55.757567    8773 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:06:55.763504    8773 start.go:297] selected driver: qemu2
	I0429 05:06:55.763514    8773 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:06:55.763521    8773 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:06:55.765773    8773 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:06:55.768576    8773 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:06:55.771605    8773 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:06:55.771633    8773 cni.go:84] Creating CNI manager for ""
	I0429 05:06:55.771641    8773 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:06:55.771645    8773 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:06:55.771677    8773 start.go:340] cluster config:
	{Name:auto-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:06:55.776351    8773 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:06:55.784503    8773 out.go:177] * Starting "auto-413000" primary control-plane node in "auto-413000" cluster
	I0429 05:06:55.788616    8773 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:06:55.788636    8773 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:06:55.788647    8773 cache.go:56] Caching tarball of preloaded images
	I0429 05:06:55.788721    8773 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:06:55.788734    8773 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:06:55.788781    8773 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/auto-413000/config.json ...
	I0429 05:06:55.788792    8773 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/auto-413000/config.json: {Name:mk035d737a680c3b95ae5faac8a5b0aa3940abd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:06:55.789025    8773 start.go:360] acquireMachinesLock for auto-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:06:55.789057    8773 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "auto-413000"
	I0429 05:06:55.789068    8773 start.go:93] Provisioning new machine with config: &{Name:auto-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:06:55.789113    8773 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:06:55.797554    8773 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:06:55.814176    8773 start.go:159] libmachine.API.Create for "auto-413000" (driver="qemu2")
	I0429 05:06:55.814199    8773 client.go:168] LocalClient.Create starting
	I0429 05:06:55.814256    8773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:06:55.814286    8773 main.go:141] libmachine: Decoding PEM data...
	I0429 05:06:55.814293    8773 main.go:141] libmachine: Parsing certificate...
	I0429 05:06:55.814349    8773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:06:55.814371    8773 main.go:141] libmachine: Decoding PEM data...
	I0429 05:06:55.814378    8773 main.go:141] libmachine: Parsing certificate...
	I0429 05:06:55.814800    8773 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:06:55.959708    8773 main.go:141] libmachine: Creating SSH key...
	I0429 05:06:56.151362    8773 main.go:141] libmachine: Creating Disk image...
	I0429 05:06:56.151371    8773 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:06:56.151596    8773 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2
	I0429 05:06:56.164544    8773 main.go:141] libmachine: STDOUT: 
	I0429 05:06:56.164575    8773 main.go:141] libmachine: STDERR: 
	I0429 05:06:56.164625    8773 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2 +20000M
	I0429 05:06:56.175528    8773 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:06:56.175546    8773 main.go:141] libmachine: STDERR: 
	I0429 05:06:56.175569    8773 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2
	I0429 05:06:56.175574    8773 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:06:56.175603    8773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:54:af:0f:a3:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2
	I0429 05:06:56.177254    8773 main.go:141] libmachine: STDOUT: 
	I0429 05:06:56.177271    8773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:06:56.177291    8773 client.go:171] duration metric: took 363.088292ms to LocalClient.Create
	I0429 05:06:58.179506    8773 start.go:128] duration metric: took 2.390363792s to createHost
	I0429 05:06:58.179580    8773 start.go:83] releasing machines lock for "auto-413000", held for 2.390518875s
	W0429 05:06:58.179686    8773 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:06:58.196057    8773 out.go:177] * Deleting "auto-413000" in qemu2 ...
	W0429 05:06:58.223302    8773 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:06:58.223342    8773 start.go:728] Will try again in 5 seconds ...
	I0429 05:07:03.225416    8773 start.go:360] acquireMachinesLock for auto-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:03.225659    8773 start.go:364] duration metric: took 193.917µs to acquireMachinesLock for "auto-413000"
	I0429 05:07:03.225706    8773 start.go:93] Provisioning new machine with config: &{Name:auto-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:03.225775    8773 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:03.233940    8773 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:03.254925    8773 start.go:159] libmachine.API.Create for "auto-413000" (driver="qemu2")
	I0429 05:07:03.254956    8773 client.go:168] LocalClient.Create starting
	I0429 05:07:03.255028    8773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:03.255062    8773 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:03.255072    8773 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:03.255113    8773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:03.255140    8773 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:03.255146    8773 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:03.255526    8773 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:03.411795    8773 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:03.465323    8773 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:03.465333    8773 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:03.465516    8773 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2
	I0429 05:07:03.478167    8773 main.go:141] libmachine: STDOUT: 
	I0429 05:07:03.478190    8773 main.go:141] libmachine: STDERR: 
	I0429 05:07:03.478251    8773 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2 +20000M
	I0429 05:07:03.489506    8773 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:03.489534    8773 main.go:141] libmachine: STDERR: 
	I0429 05:07:03.489554    8773 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2
	I0429 05:07:03.489559    8773 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:03.489593    8773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:b9:15:1a:d0:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/auto-413000/disk.qcow2
	I0429 05:07:03.491482    8773 main.go:141] libmachine: STDOUT: 
	I0429 05:07:03.491498    8773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:03.491509    8773 client.go:171] duration metric: took 236.548208ms to LocalClient.Create
	I0429 05:07:05.493622    8773 start.go:128] duration metric: took 2.267836458s to createHost
	I0429 05:07:05.493665    8773 start.go:83] releasing machines lock for "auto-413000", held for 2.26799375s
	W0429 05:07:05.493887    8773 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:05.502232    8773 out.go:177] 
	W0429 05:07:05.510323    8773 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:07:05.510350    8773 out.go:239] * 
	* 
	W0429 05:07:05.511427    8773 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:07:05.519354    8773 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.86s)
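Note: each failed start in this report attempts the VM twice (the driver logs "Will try again in 5 seconds" between tries) before exiting with GUEST_PROVISION, which accounts for the roughly 5-10 second durations. To confirm the refusal is host-wide rather than per-profile, the unix socket can be probed directly, independent of minikube; a sketch using the BSD netcat shipped with macOS:

	# Exit status 0 means something is accepting connections on the socket.
	nc -U -w 1 /var/run/socket_vmnet < /dev/null && echo listening || echo no-listener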

TestNetworkPlugins/group/flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.827487541s)

-- stdout --
	* [flannel-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-413000" primary control-plane node in "flannel-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:07:07.798824    8886 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:07:07.798969    8886 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:07.798972    8886 out.go:304] Setting ErrFile to fd 2...
	I0429 05:07:07.798975    8886 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:07.799109    8886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:07:07.800216    8886 out.go:298] Setting JSON to false
	I0429 05:07:07.816371    8886 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5798,"bootTime":1714386629,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:07:07.816439    8886 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:07:07.822390    8886 out.go:177] * [flannel-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:07:07.830397    8886 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:07:07.834417    8886 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:07:07.830493    8886 notify.go:220] Checking for updates...
	I0429 05:07:07.840316    8886 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:07:07.843411    8886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:07:07.846349    8886 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:07:07.849358    8886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:07:07.852694    8886 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:07:07.852759    8886 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:07:07.852804    8886 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:07:07.857387    8886 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:07:07.864402    8886 start.go:297] selected driver: qemu2
	I0429 05:07:07.864411    8886 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:07:07.864419    8886 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:07:07.866618    8886 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:07:07.869445    8886 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:07:07.872422    8886 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:07:07.872458    8886 cni.go:84] Creating CNI manager for "flannel"
	I0429 05:07:07.872462    8886 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0429 05:07:07.872490    8886 start.go:340] cluster config:
	{Name:flannel-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:07:07.876632    8886 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:07:07.885412    8886 out.go:177] * Starting "flannel-413000" primary control-plane node in "flannel-413000" cluster
	I0429 05:07:07.889358    8886 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:07:07.889373    8886 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:07:07.889382    8886 cache.go:56] Caching tarball of preloaded images
	I0429 05:07:07.889444    8886 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:07:07.889449    8886 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:07:07.889498    8886 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/flannel-413000/config.json ...
	I0429 05:07:07.889508    8886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/flannel-413000/config.json: {Name:mk2b29fff2954d48ae8cf68dda225e15e9a09d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:07:07.889727    8886 start.go:360] acquireMachinesLock for flannel-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:07.889756    8886 start.go:364] duration metric: took 24µs to acquireMachinesLock for "flannel-413000"
	I0429 05:07:07.889767    8886 start.go:93] Provisioning new machine with config: &{Name:flannel-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:07.889790    8886 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:07.898357    8886 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:07.914282    8886 start.go:159] libmachine.API.Create for "flannel-413000" (driver="qemu2")
	I0429 05:07:07.914308    8886 client.go:168] LocalClient.Create starting
	I0429 05:07:07.914389    8886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:07.914423    8886 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:07.914433    8886 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:07.914494    8886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:07.914521    8886 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:07.914528    8886 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:07.914960    8886 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:08.059564    8886 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:08.186231    8886 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:08.186237    8886 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:08.186414    8886 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2
	I0429 05:07:08.199156    8886 main.go:141] libmachine: STDOUT: 
	I0429 05:07:08.199176    8886 main.go:141] libmachine: STDERR: 
	I0429 05:07:08.199232    8886 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2 +20000M
	I0429 05:07:08.210353    8886 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:08.210372    8886 main.go:141] libmachine: STDERR: 
	I0429 05:07:08.210394    8886 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2
	I0429 05:07:08.210399    8886 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:08.210430    8886 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:3d:97:51:c7:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2
	I0429 05:07:08.212084    8886 main.go:141] libmachine: STDOUT: 
	I0429 05:07:08.212100    8886 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:08.212125    8886 client.go:171] duration metric: took 297.807833ms to LocalClient.Create
	I0429 05:07:10.214326    8886 start.go:128] duration metric: took 2.324506791s to createHost
	I0429 05:07:10.214402    8886 start.go:83] releasing machines lock for "flannel-413000", held for 2.324641s
	W0429 05:07:10.214502    8886 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:10.229919    8886 out.go:177] * Deleting "flannel-413000" in qemu2 ...
	W0429 05:07:10.258933    8886 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:10.259008    8886 start.go:728] Will try again in 5 seconds ...
	I0429 05:07:15.261218    8886 start.go:360] acquireMachinesLock for flannel-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:15.261750    8886 start.go:364] duration metric: took 440.375µs to acquireMachinesLock for "flannel-413000"
	I0429 05:07:15.261904    8886 start.go:93] Provisioning new machine with config: &{Name:flannel-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:15.262217    8886 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:15.267901    8886 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:15.317421    8886 start.go:159] libmachine.API.Create for "flannel-413000" (driver="qemu2")
	I0429 05:07:15.317472    8886 client.go:168] LocalClient.Create starting
	I0429 05:07:15.317594    8886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:15.317667    8886 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:15.317685    8886 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:15.317746    8886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:15.317788    8886 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:15.317801    8886 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:15.318330    8886 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:15.471851    8886 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:15.526803    8886 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:15.526809    8886 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:15.527015    8886 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2
	I0429 05:07:15.539641    8886 main.go:141] libmachine: STDOUT: 
	I0429 05:07:15.539663    8886 main.go:141] libmachine: STDERR: 
	I0429 05:07:15.539716    8886 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2 +20000M
	I0429 05:07:15.551269    8886 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:15.551288    8886 main.go:141] libmachine: STDERR: 
	I0429 05:07:15.551306    8886 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2
	I0429 05:07:15.551309    8886 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:15.551343    8886 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:0d:67:6b:67:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/flannel-413000/disk.qcow2
	I0429 05:07:15.553090    8886 main.go:141] libmachine: STDOUT: 
	I0429 05:07:15.553107    8886 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:15.553121    8886 client.go:171] duration metric: took 235.639625ms to LocalClient.Create
	I0429 05:07:17.555216    8886 start.go:128] duration metric: took 2.292982166s to createHost
	I0429 05:07:17.555249    8886 start.go:83] releasing machines lock for "flannel-413000", held for 2.293484083s
	W0429 05:07:17.555397    8886 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:17.571832    8886 out.go:177] 
	W0429 05:07:17.575805    8886 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:07:17.575814    8886 out.go:239] * 
	* 
	W0429 05:07:17.576612    8886 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:07:17.584786    8886 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.83s)
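
The stderr above records the full launch command: socket_vmnet_client connects to /var/run/socket_vmnet first and, only once it holds the connection, execs qemu-system-aarch64 with the socket passed down as fd 3 (hence the -netdev socket,id=net0,fd=3 argument). That ordering is why each run dies with "Connection refused" before QEMU produces any output. The failure can be reproduced outside the test harness with the paths shown in the log; here `true` is a hypothetical stand-in for the real qemu command line:

	# socket_vmnet_client <socket-path> <command...>: connect, then exec
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

With the daemon down, this should print the same Failed to connect to "/var/run/socket_vmnet": Connection refused and exit non-zero, matching the exit status 1 wrapped in every error above.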

TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.7917265s)

-- stdout --
	* [enable-default-cni-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-413000" primary control-plane node in "enable-default-cni-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:07:20.071491    9007 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:07:20.071614    9007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:20.071617    9007 out.go:304] Setting ErrFile to fd 2...
	I0429 05:07:20.071619    9007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:20.071745    9007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:07:20.072888    9007 out.go:298] Setting JSON to false
	I0429 05:07:20.089006    9007 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5811,"bootTime":1714386629,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:07:20.089111    9007 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:07:20.094010    9007 out.go:177] * [enable-default-cni-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:07:20.106122    9007 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:07:20.102198    9007 notify.go:220] Checking for updates...
	I0429 05:07:20.112135    9007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:07:20.116068    9007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:07:20.119120    9007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:07:20.122149    9007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:07:20.125129    9007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:07:20.128543    9007 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:07:20.128608    9007 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:07:20.128666    9007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:07:20.133110    9007 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:07:20.140161    9007 start.go:297] selected driver: qemu2
	I0429 05:07:20.140167    9007 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:07:20.140173    9007 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:07:20.142507    9007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:07:20.145187    9007 out.go:177] * Automatically selected the socket_vmnet network
	E0429 05:07:20.148258    9007 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0429 05:07:20.148273    9007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:07:20.148298    9007 cni.go:84] Creating CNI manager for "bridge"
	I0429 05:07:20.148305    9007 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:07:20.148335    9007 start.go:340] cluster config:
	{Name:enable-default-cni-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:07:20.152837    9007 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:07:20.160152    9007 out.go:177] * Starting "enable-default-cni-413000" primary control-plane node in "enable-default-cni-413000" cluster
	I0429 05:07:20.164137    9007 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:07:20.164153    9007 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:07:20.164160    9007 cache.go:56] Caching tarball of preloaded images
	I0429 05:07:20.164220    9007 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:07:20.164225    9007 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:07:20.164272    9007 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/enable-default-cni-413000/config.json ...
	I0429 05:07:20.164283    9007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/enable-default-cni-413000/config.json: {Name:mkdc811609c8fcf23b728cdb223e60ecdb9bdad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:07:20.164503    9007 start.go:360] acquireMachinesLock for enable-default-cni-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:20.164539    9007 start.go:364] duration metric: took 27.709µs to acquireMachinesLock for "enable-default-cni-413000"
	I0429 05:07:20.164551    9007 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:20.164588    9007 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:20.173104    9007 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:20.189948    9007 start.go:159] libmachine.API.Create for "enable-default-cni-413000" (driver="qemu2")
	I0429 05:07:20.189974    9007 client.go:168] LocalClient.Create starting
	I0429 05:07:20.190032    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:20.190062    9007 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:20.190070    9007 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:20.190111    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:20.190133    9007 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:20.190139    9007 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:20.190506    9007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:20.338804    9007 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:20.373140    9007 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:20.373146    9007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:20.373331    9007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2
	I0429 05:07:20.386451    9007 main.go:141] libmachine: STDOUT: 
	I0429 05:07:20.386480    9007 main.go:141] libmachine: STDERR: 
	I0429 05:07:20.386543    9007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2 +20000M
	I0429 05:07:20.398169    9007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:20.398195    9007 main.go:141] libmachine: STDERR: 
	I0429 05:07:20.398222    9007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2
	I0429 05:07:20.398226    9007 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:20.398259    9007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:4f:aa:1c:99:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2
	I0429 05:07:20.400107    9007 main.go:141] libmachine: STDOUT: 
	I0429 05:07:20.400125    9007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:20.400141    9007 client.go:171] duration metric: took 210.163875ms to LocalClient.Create
	I0429 05:07:22.402304    9007 start.go:128] duration metric: took 2.237697208s to createHost
	I0429 05:07:22.402409    9007 start.go:83] releasing machines lock for "enable-default-cni-413000", held for 2.237865792s
	W0429 05:07:22.402492    9007 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:22.415202    9007 out.go:177] * Deleting "enable-default-cni-413000" in qemu2 ...
	W0429 05:07:22.439884    9007 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:22.439919    9007 start.go:728] Will try again in 5 seconds ...
	I0429 05:07:27.442058    9007 start.go:360] acquireMachinesLock for enable-default-cni-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:27.442315    9007 start.go:364] duration metric: took 194.875µs to acquireMachinesLock for "enable-default-cni-413000"
	I0429 05:07:27.442344    9007 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:27.442456    9007 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:27.449745    9007 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:27.473362    9007 start.go:159] libmachine.API.Create for "enable-default-cni-413000" (driver="qemu2")
	I0429 05:07:27.473400    9007 client.go:168] LocalClient.Create starting
	I0429 05:07:27.473473    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:27.473524    9007 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:27.473534    9007 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:27.473573    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:27.473600    9007 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:27.473608    9007 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:27.473924    9007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:27.621291    9007 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:27.761346    9007 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:27.761353    9007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:27.761564    9007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2
	I0429 05:07:27.774476    9007 main.go:141] libmachine: STDOUT: 
	I0429 05:07:27.774499    9007 main.go:141] libmachine: STDERR: 
	I0429 05:07:27.774558    9007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2 +20000M
	I0429 05:07:27.785628    9007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:27.785646    9007 main.go:141] libmachine: STDERR: 
	I0429 05:07:27.785659    9007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2
	I0429 05:07:27.785663    9007 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:27.785692    9007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:64:0e:f7:14:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/enable-default-cni-413000/disk.qcow2
	I0429 05:07:27.787475    9007 main.go:141] libmachine: STDOUT: 
	I0429 05:07:27.787495    9007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:27.787510    9007 client.go:171] duration metric: took 314.105667ms to LocalClient.Create
	I0429 05:07:29.789716    9007 start.go:128] duration metric: took 2.347235334s to createHost
	I0429 05:07:29.789832    9007 start.go:83] releasing machines lock for "enable-default-cni-413000", held for 2.347494291s
	W0429 05:07:29.790248    9007 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:29.802668    9007 out.go:177] 
	W0429 05:07:29.806833    9007 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:07:29.806864    9007 out.go:239] * 
	* 
	W0429 05:07:29.817803    9007 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:07:29.820899    9007 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
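
Note the E-level line from start_flags.go:464 in the stderr above: minikube treats --enable-default-cni as deprecated and rewrites it to --cni=bridge, so this profile is saved with CNI:bridge and is configuration-equivalent to the bridge test that follows. The non-deprecated form of the invocation under test would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2

Either spelling fails identically here, since the run never gets past the socket_vmnet connection to the point where CNI configuration matters.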

TestNetworkPlugins/group/bridge/Start (9.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.9266375s)

-- stdout --
	* [bridge-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-413000" primary control-plane node in "bridge-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:07:32.147202    9117 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:07:32.147341    9117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:32.147345    9117 out.go:304] Setting ErrFile to fd 2...
	I0429 05:07:32.147347    9117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:32.147480    9117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:07:32.148767    9117 out.go:298] Setting JSON to false
	I0429 05:07:32.167215    9117 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5823,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:07:32.167302    9117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:07:32.172588    9117 out.go:177] * [bridge-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:07:32.179662    9117 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:07:32.179750    9117 notify.go:220] Checking for updates...
	I0429 05:07:32.186569    9117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:07:32.189526    9117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:07:32.192579    9117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:07:32.195607    9117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:07:32.196733    9117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:07:32.199970    9117 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:07:32.200036    9117 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:07:32.200080    9117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:07:32.203586    9117 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:07:32.208579    9117 start.go:297] selected driver: qemu2
	I0429 05:07:32.208589    9117 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:07:32.208596    9117 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:07:32.211020    9117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:07:32.213560    9117 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:07:32.216670    9117 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:07:32.216710    9117 cni.go:84] Creating CNI manager for "bridge"
	I0429 05:07:32.216714    9117 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:07:32.216764    9117 start.go:340] cluster config:
	{Name:bridge-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:07:32.221553    9117 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:07:32.229577    9117 out.go:177] * Starting "bridge-413000" primary control-plane node in "bridge-413000" cluster
	I0429 05:07:32.233487    9117 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:07:32.233512    9117 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:07:32.233516    9117 cache.go:56] Caching tarball of preloaded images
	I0429 05:07:32.233591    9117 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:07:32.233597    9117 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:07:32.233663    9117 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/bridge-413000/config.json ...
	I0429 05:07:32.233674    9117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/bridge-413000/config.json: {Name:mk43847939e0e43db495c111e3008fb948ddbaf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:07:32.234011    9117 start.go:360] acquireMachinesLock for bridge-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:32.234042    9117 start.go:364] duration metric: took 26.084µs to acquireMachinesLock for "bridge-413000"
	I0429 05:07:32.234058    9117 start.go:93] Provisioning new machine with config: &{Name:bridge-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:32.234088    9117 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:32.238615    9117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:32.254512    9117 start.go:159] libmachine.API.Create for "bridge-413000" (driver="qemu2")
	I0429 05:07:32.254541    9117 client.go:168] LocalClient.Create starting
	I0429 05:07:32.254612    9117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:32.254644    9117 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:32.254657    9117 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:32.254693    9117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:32.254715    9117 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:32.254721    9117 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:32.255068    9117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:32.401390    9117 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:32.677159    9117 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:32.677171    9117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:32.677368    9117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2
	I0429 05:07:32.690173    9117 main.go:141] libmachine: STDOUT: 
	I0429 05:07:32.690198    9117 main.go:141] libmachine: STDERR: 
	I0429 05:07:32.690258    9117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2 +20000M
	I0429 05:07:32.701445    9117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:32.701465    9117 main.go:141] libmachine: STDERR: 
	I0429 05:07:32.701481    9117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2
	I0429 05:07:32.701485    9117 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:32.701520    9117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:0a:aa:62:08:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2
	I0429 05:07:32.703242    9117 main.go:141] libmachine: STDOUT: 
	I0429 05:07:32.703256    9117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:32.703277    9117 client.go:171] duration metric: took 448.730542ms to LocalClient.Create
	I0429 05:07:34.705389    9117 start.go:128] duration metric: took 2.47129375s to createHost
	I0429 05:07:34.705411    9117 start.go:83] releasing machines lock for "bridge-413000", held for 2.471369125s
	W0429 05:07:34.705443    9117 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:34.713614    9117 out.go:177] * Deleting "bridge-413000" in qemu2 ...
	W0429 05:07:34.723930    9117 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:34.723937    9117 start.go:728] Will try again in 5 seconds ...
	I0429 05:07:39.726109    9117 start.go:360] acquireMachinesLock for bridge-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:39.726331    9117 start.go:364] duration metric: took 183.375µs to acquireMachinesLock for "bridge-413000"
	I0429 05:07:39.726381    9117 start.go:93] Provisioning new machine with config: &{Name:bridge-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:39.726463    9117 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:39.734573    9117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:39.756512    9117 start.go:159] libmachine.API.Create for "bridge-413000" (driver="qemu2")
	I0429 05:07:39.756538    9117 client.go:168] LocalClient.Create starting
	I0429 05:07:39.756617    9117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:39.756660    9117 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:39.756669    9117 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:39.756714    9117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:39.756743    9117 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:39.756749    9117 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:39.757090    9117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:39.905813    9117 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:39.965397    9117 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:39.965408    9117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:39.965656    9117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2
	I0429 05:07:39.979661    9117 main.go:141] libmachine: STDOUT: 
	I0429 05:07:39.979682    9117 main.go:141] libmachine: STDERR: 
	I0429 05:07:39.979772    9117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2 +20000M
	I0429 05:07:39.992667    9117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:39.992688    9117 main.go:141] libmachine: STDERR: 
	I0429 05:07:39.992709    9117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2
	I0429 05:07:39.992717    9117 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:39.992766    9117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:ac:a0:5c:6f:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/bridge-413000/disk.qcow2
	I0429 05:07:39.994806    9117 main.go:141] libmachine: STDOUT: 
	I0429 05:07:39.994823    9117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:39.994839    9117 client.go:171] duration metric: took 238.297458ms to LocalClient.Create
	I0429 05:07:41.996958    9117 start.go:128] duration metric: took 2.270472792s to createHost
	I0429 05:07:41.996999    9117 start.go:83] releasing machines lock for "bridge-413000", held for 2.270663333s
	W0429 05:07:41.997278    9117 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:42.007800    9117 out.go:177] 
	W0429 05:07:42.014866    9117 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:07:42.014900    9117 out.go:239] * 
	W0429 05:07:42.016368    9117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:07:42.026732    9117 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.93s)
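Every qemu2 start in this group dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and minikube aborts with GUEST_PROVISION. As a minimal sketch (not part of the test suite; the socket path is taken verbatim from the log above), the connectivity check that fails can be reproduced from Go:

package main

// probe_socket_vmnet: hypothetical standalone diagnostic, assumed to run on
// the same CI host as the failing tests. It dials the unix socket that
// socket_vmnet_client reports as unreachable in the log above.
import (
	"fmt"
	"net"
	"os"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing qemu invocation
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// "connection refused" here means nothing is accepting on the
		// socket - the same failure mode as the STDERR lines above.
		fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way, the socket_vmnet daemon on the host is down, and every qemu2-driver start in the run can be expected to fail identically regardless of the CNI under test.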

TestNetworkPlugins/group/kubenet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.846443625s)

-- stdout --
	* [kubenet-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-413000" primary control-plane node in "kubenet-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:07:44.288902    9230 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:07:44.289042    9230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:44.289045    9230 out.go:304] Setting ErrFile to fd 2...
	I0429 05:07:44.289047    9230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:44.289179    9230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:07:44.290391    9230 out.go:298] Setting JSON to false
	I0429 05:07:44.307865    9230 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5835,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:07:44.307954    9230 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:07:44.313696    9230 out.go:177] * [kubenet-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:07:44.321788    9230 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:07:44.324539    9230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:07:44.321880    9230 notify.go:220] Checking for updates...
	I0429 05:07:44.327639    9230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:07:44.330629    9230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:07:44.332205    9230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:07:44.335551    9230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:07:44.338936    9230 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:07:44.339006    9230 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:07:44.339052    9230 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:07:44.343480    9230 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:07:44.350563    9230 start.go:297] selected driver: qemu2
	I0429 05:07:44.350569    9230 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:07:44.350575    9230 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:07:44.352688    9230 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:07:44.355721    9230 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:07:44.358715    9230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:07:44.358757    9230 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0429 05:07:44.358785    9230 start.go:340] cluster config:
	{Name:kubenet-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:07:44.363006    9230 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:07:44.367635    9230 out.go:177] * Starting "kubenet-413000" primary control-plane node in "kubenet-413000" cluster
	I0429 05:07:44.375558    9230 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:07:44.375570    9230 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:07:44.375576    9230 cache.go:56] Caching tarball of preloaded images
	I0429 05:07:44.375627    9230 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:07:44.375632    9230 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:07:44.375674    9230 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/kubenet-413000/config.json ...
	I0429 05:07:44.375683    9230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/kubenet-413000/config.json: {Name:mk72f526f5b8b49068bea253e3c249a4c13bf9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:07:44.375887    9230 start.go:360] acquireMachinesLock for kubenet-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:44.375917    9230 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "kubenet-413000"
	I0429 05:07:44.375927    9230 start.go:93] Provisioning new machine with config: &{Name:kubenet-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:44.375954    9230 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:44.383567    9230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:44.399175    9230 start.go:159] libmachine.API.Create for "kubenet-413000" (driver="qemu2")
	I0429 05:07:44.399204    9230 client.go:168] LocalClient.Create starting
	I0429 05:07:44.399284    9230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:44.399315    9230 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:44.399331    9230 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:44.399374    9230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:44.399397    9230 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:44.399404    9230 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:44.399829    9230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:44.546103    9230 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:44.632154    9230 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:44.632164    9230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:44.632381    9230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2
	I0429 05:07:44.645462    9230 main.go:141] libmachine: STDOUT: 
	I0429 05:07:44.645513    9230 main.go:141] libmachine: STDERR: 
	I0429 05:07:44.645584    9230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2 +20000M
	I0429 05:07:44.656874    9230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:44.656890    9230 main.go:141] libmachine: STDERR: 
	I0429 05:07:44.656914    9230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2
	I0429 05:07:44.656920    9230 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:44.656948    9230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:8b:ae:0a:1b:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2
	I0429 05:07:44.658770    9230 main.go:141] libmachine: STDOUT: 
	I0429 05:07:44.658785    9230 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:44.658804    9230 client.go:171] duration metric: took 259.594375ms to LocalClient.Create
	I0429 05:07:46.660924    9230 start.go:128] duration metric: took 2.284963917s to createHost
	I0429 05:07:46.660992    9230 start.go:83] releasing machines lock for "kubenet-413000", held for 2.285074958s
	W0429 05:07:46.661021    9230 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:46.674860    9230 out.go:177] * Deleting "kubenet-413000" in qemu2 ...
	W0429 05:07:46.693218    9230 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:46.693229    9230 start.go:728] Will try again in 5 seconds ...
	I0429 05:07:51.695464    9230 start.go:360] acquireMachinesLock for kubenet-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:51.696074    9230 start.go:364] duration metric: took 467µs to acquireMachinesLock for "kubenet-413000"
	I0429 05:07:51.696170    9230 start.go:93] Provisioning new machine with config: &{Name:kubenet-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:51.696468    9230 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:51.705108    9230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:51.748695    9230 start.go:159] libmachine.API.Create for "kubenet-413000" (driver="qemu2")
	I0429 05:07:51.748749    9230 client.go:168] LocalClient.Create starting
	I0429 05:07:51.748873    9230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:51.748948    9230 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:51.748963    9230 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:51.749021    9230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:51.749061    9230 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:51.749085    9230 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:51.749550    9230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:51.903083    9230 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:52.034464    9230 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:52.034471    9230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:52.034681    9230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2
	I0429 05:07:52.047845    9230 main.go:141] libmachine: STDOUT: 
	I0429 05:07:52.047867    9230 main.go:141] libmachine: STDERR: 
	I0429 05:07:52.047928    9230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2 +20000M
	I0429 05:07:52.058852    9230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:52.058868    9230 main.go:141] libmachine: STDERR: 
	I0429 05:07:52.058881    9230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2
	I0429 05:07:52.058887    9230 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:52.058925    9230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e8:03:1b:fa:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/kubenet-413000/disk.qcow2
	I0429 05:07:52.060611    9230 main.go:141] libmachine: STDOUT: 
	I0429 05:07:52.060627    9230 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:52.060641    9230 client.go:171] duration metric: took 311.885583ms to LocalClient.Create
	I0429 05:07:54.062852    9230 start.go:128] duration metric: took 2.366350584s to createHost
	I0429 05:07:54.062982    9230 start.go:83] releasing machines lock for "kubenet-413000", held for 2.366854334s
	W0429 05:07:54.063319    9230 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:54.073098    9230 out.go:177] 
	W0429 05:07:54.080136    9230 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:07:54.080192    9230 out.go:239] * 
	W0429 05:07:54.082832    9230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:07:54.089875    9230 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
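The kubenet run above fails identically to the bridge run, which points at shared host state rather than the network plugin under test. A companion sketch (hypothetical, same assumptions as the dial probe above) separates the two situations the dial error can stem from: a socket file that was never created versus a daemon that is not accepting.

package main

// stat_socket_vmnet: hypothetical check. "Connection refused" on a unix
// socket normally implies the socket file exists; this confirms it and
// reports the file mode.
import (
	"fmt"
	"os"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	info, err := os.Stat(sock)
	if err != nil {
		fmt.Println("socket path missing:", err) // daemon never created it
		return
	}
	if info.Mode()&os.ModeSocket != 0 {
		fmt.Println(sock, "is a unix socket; the daemon behind it is not accepting connections")
	} else {
		fmt.Println(sock, "exists but is not a socket:", info.Mode())
	}
}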

TestNetworkPlugins/group/custom-flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.850057s)

-- stdout --
	* [custom-flannel-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-413000" primary control-plane node in "custom-flannel-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:07:56.367202    9343 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:07:56.367320    9343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:56.367323    9343 out.go:304] Setting ErrFile to fd 2...
	I0429 05:07:56.367325    9343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:07:56.367465    9343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:07:56.368534    9343 out.go:298] Setting JSON to false
	I0429 05:07:56.385083    9343 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5847,"bootTime":1714386629,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:07:56.385144    9343 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:07:56.390539    9343 out.go:177] * [custom-flannel-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:07:56.398555    9343 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:07:56.402493    9343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:07:56.398587    9343 notify.go:220] Checking for updates...
	I0429 05:07:56.408524    9343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:07:56.411524    9343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:07:56.414482    9343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:07:56.417569    9343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:07:56.420751    9343 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:07:56.420823    9343 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:07:56.420873    9343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:07:56.425491    9343 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:07:56.431486    9343 start.go:297] selected driver: qemu2
	I0429 05:07:56.431493    9343 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:07:56.431500    9343 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:07:56.433737    9343 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:07:56.437513    9343 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:07:56.440627    9343 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:07:56.440682    9343 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0429 05:07:56.440696    9343 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0429 05:07:56.440741    9343 start.go:340] cluster config:
	{Name:custom-flannel-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:07:56.445090    9343 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:07:56.453523    9343 out.go:177] * Starting "custom-flannel-413000" primary control-plane node in "custom-flannel-413000" cluster
	I0429 05:07:56.457596    9343 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:07:56.457613    9343 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:07:56.457624    9343 cache.go:56] Caching tarball of preloaded images
	I0429 05:07:56.457693    9343 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:07:56.457698    9343 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:07:56.457757    9343 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/custom-flannel-413000/config.json ...
	I0429 05:07:56.457769    9343 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/custom-flannel-413000/config.json: {Name:mk028e0beb8bff19ceddfb1256abef908961c267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:07:56.458005    9343 start.go:360] acquireMachinesLock for custom-flannel-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:07:56.458039    9343 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "custom-flannel-413000"
	I0429 05:07:56.458051    9343 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:07:56.458077    9343 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:07:56.466551    9343 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:07:56.484115    9343 start.go:159] libmachine.API.Create for "custom-flannel-413000" (driver="qemu2")
	I0429 05:07:56.484138    9343 client.go:168] LocalClient.Create starting
	I0429 05:07:56.484192    9343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:07:56.484224    9343 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:56.484238    9343 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:56.484274    9343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:07:56.484296    9343 main.go:141] libmachine: Decoding PEM data...
	I0429 05:07:56.484304    9343 main.go:141] libmachine: Parsing certificate...
	I0429 05:07:56.484647    9343 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:07:56.632345    9343 main.go:141] libmachine: Creating SSH key...
	I0429 05:07:56.816818    9343 main.go:141] libmachine: Creating Disk image...
	I0429 05:07:56.816831    9343 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:07:56.817066    9343 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2
	I0429 05:07:56.830136    9343 main.go:141] libmachine: STDOUT: 
	I0429 05:07:56.830167    9343 main.go:141] libmachine: STDERR: 
	I0429 05:07:56.830228    9343 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2 +20000M
	I0429 05:07:56.841607    9343 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:07:56.841630    9343 main.go:141] libmachine: STDERR: 
	I0429 05:07:56.841653    9343 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2
	I0429 05:07:56.841658    9343 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:07:56.841686    9343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:27:59:85:44:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2
	I0429 05:07:56.843382    9343 main.go:141] libmachine: STDOUT: 
	I0429 05:07:56.843398    9343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:07:56.843425    9343 client.go:171] duration metric: took 359.284042ms to LocalClient.Create
	I0429 05:07:58.845611    9343 start.go:128] duration metric: took 2.387507334s to createHost
	I0429 05:07:58.845707    9343 start.go:83] releasing machines lock for "custom-flannel-413000", held for 2.387665s
	W0429 05:07:58.845803    9343 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:58.854255    9343 out.go:177] * Deleting "custom-flannel-413000" in qemu2 ...
	W0429 05:07:58.872894    9343 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:07:58.872924    9343 start.go:728] Will try again in 5 seconds ...
	I0429 05:08:03.875286    9343 start.go:360] acquireMachinesLock for custom-flannel-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:03.875667    9343 start.go:364] duration metric: took 112.667µs to acquireMachinesLock for "custom-flannel-413000"
	I0429 05:08:03.875686    9343 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:03.875750    9343 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:03.879954    9343 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:08:03.895513    9343 start.go:159] libmachine.API.Create for "custom-flannel-413000" (driver="qemu2")
	I0429 05:08:03.895540    9343 client.go:168] LocalClient.Create starting
	I0429 05:08:03.895607    9343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:03.895636    9343 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:03.895647    9343 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:03.895679    9343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:03.895701    9343 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:03.895708    9343 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:03.895952    9343 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:04.041393    9343 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:04.112211    9343 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:04.112220    9343 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:04.112418    9343 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2
	I0429 05:08:04.125351    9343 main.go:141] libmachine: STDOUT: 
	I0429 05:08:04.125372    9343 main.go:141] libmachine: STDERR: 
	I0429 05:08:04.125426    9343 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2 +20000M
	I0429 05:08:04.136597    9343 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:04.136683    9343 main.go:141] libmachine: STDERR: 
	I0429 05:08:04.136700    9343 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2
	I0429 05:08:04.136712    9343 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:04.136747    9343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:39:6f:d6:7c:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/custom-flannel-413000/disk.qcow2
	I0429 05:08:04.138578    9343 main.go:141] libmachine: STDOUT: 
	I0429 05:08:04.138665    9343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:04.138682    9343 client.go:171] duration metric: took 243.136292ms to LocalClient.Create
	I0429 05:08:06.140901    9343 start.go:128] duration metric: took 2.265118875s to createHost
	I0429 05:08:06.141030    9343 start.go:83] releasing machines lock for "custom-flannel-413000", held for 2.265344542s
	W0429 05:08:06.141390    9343 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:06.151212    9343 out.go:177] 
	W0429 05:08:06.158218    9343 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:08:06.158241    9343 out.go:239] * 
	* 
	W0429 05:08:06.161231    9343 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:08:06.172194    9343 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.85s)
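
[Editor's note, not part of the captured log] Every start failure in this group reduces to the same root cause visible in the stderr above: nothing is accepting connections on the unix socket /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a vmnet file descriptor and minikube gives up with GUEST_PROVISION (exit status 80) after a single retry. A quick connectivity probe on the CI host, sketched here under the assumption that socket_vmnet is installed under /opt/socket_vmnet as the log's paths indicate:

	# The daemon's unix socket should exist:
	ls -l /var/run/socket_vmnet
	# socket_vmnet_client connects to the socket, then execs the given command
	# with the connection inherited as a file descriptor; passing `true` turns
	# it into a pure connectivity probe.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the probe prints the same "Connection refused", the daemon is down, and none of the qemu2 network-plugin tests can pass until it is restarted.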

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.825975084s)

                                                
                                                
-- stdout --
	* [calico-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-413000" primary control-plane node in "calico-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:08:08.637366    9464 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:08:08.637511    9464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:08.637515    9464 out.go:304] Setting ErrFile to fd 2...
	I0429 05:08:08.637517    9464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:08.637631    9464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:08:08.638696    9464 out.go:298] Setting JSON to false
	I0429 05:08:08.654782    9464 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5859,"bootTime":1714386629,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:08:08.654854    9464 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:08:08.660539    9464 out.go:177] * [calico-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:08:08.668547    9464 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:08:08.668630    9464 notify.go:220] Checking for updates...
	I0429 05:08:08.673941    9464 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:08:08.681471    9464 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:08:08.685367    9464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:08:08.688457    9464 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:08:08.691453    9464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:08:08.694842    9464 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:08:08.694917    9464 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:08:08.694960    9464 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:08:08.699423    9464 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:08:08.706312    9464 start.go:297] selected driver: qemu2
	I0429 05:08:08.706319    9464 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:08:08.706325    9464 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:08:08.708671    9464 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:08:08.712478    9464 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:08:08.715557    9464 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:08:08.715595    9464 cni.go:84] Creating CNI manager for "calico"
	I0429 05:08:08.715599    9464 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0429 05:08:08.715638    9464 start.go:340] cluster config:
	{Name:calico-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:08:08.720186    9464 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:08.724517    9464 out.go:177] * Starting "calico-413000" primary control-plane node in "calico-413000" cluster
	I0429 05:08:08.732541    9464 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:08:08.732574    9464 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:08:08.732587    9464 cache.go:56] Caching tarball of preloaded images
	I0429 05:08:08.732647    9464 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:08:08.732651    9464 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:08:08.732708    9464 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/calico-413000/config.json ...
	I0429 05:08:08.732719    9464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/calico-413000/config.json: {Name:mk3893bc61597acb5a149a125540282f6bdfcfad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:08:08.732936    9464 start.go:360] acquireMachinesLock for calico-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:08.732966    9464 start.go:364] duration metric: took 24.583µs to acquireMachinesLock for "calico-413000"
	I0429 05:08:08.732976    9464 start.go:93] Provisioning new machine with config: &{Name:calico-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:08.733010    9464 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:08.741479    9464 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:08:08.756692    9464 start.go:159] libmachine.API.Create for "calico-413000" (driver="qemu2")
	I0429 05:08:08.756723    9464 client.go:168] LocalClient.Create starting
	I0429 05:08:08.756787    9464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:08.756816    9464 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:08.756825    9464 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:08.756861    9464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:08.756883    9464 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:08.756889    9464 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:08.757238    9464 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:08.903496    9464 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:09.024756    9464 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:09.024767    9464 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:09.024969    9464 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2
	I0429 05:08:09.037733    9464 main.go:141] libmachine: STDOUT: 
	I0429 05:08:09.037754    9464 main.go:141] libmachine: STDERR: 
	I0429 05:08:09.037825    9464 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2 +20000M
	I0429 05:08:09.049232    9464 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:09.049253    9464 main.go:141] libmachine: STDERR: 
	I0429 05:08:09.049269    9464 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2
	I0429 05:08:09.049272    9464 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:09.049315    9464 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:18:d1:f4:d9:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2
	I0429 05:08:09.051055    9464 main.go:141] libmachine: STDOUT: 
	I0429 05:08:09.051072    9464 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:09.051089    9464 client.go:171] duration metric: took 294.361042ms to LocalClient.Create
	I0429 05:08:11.053413    9464 start.go:128] duration metric: took 2.320275416s to createHost
	I0429 05:08:11.053499    9464 start.go:83] releasing machines lock for "calico-413000", held for 2.320529625s
	W0429 05:08:11.053611    9464 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:11.062874    9464 out.go:177] * Deleting "calico-413000" in qemu2 ...
	W0429 05:08:11.092220    9464 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:11.092276    9464 start.go:728] Will try again in 5 seconds ...
	I0429 05:08:16.094443    9464 start.go:360] acquireMachinesLock for calico-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:16.094620    9464 start.go:364] duration metric: took 128.084µs to acquireMachinesLock for "calico-413000"
	I0429 05:08:16.094659    9464 start.go:93] Provisioning new machine with config: &{Name:calico-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:16.094711    9464 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:16.103919    9464 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:08:16.121663    9464 start.go:159] libmachine.API.Create for "calico-413000" (driver="qemu2")
	I0429 05:08:16.121689    9464 client.go:168] LocalClient.Create starting
	I0429 05:08:16.121763    9464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:16.121801    9464 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:16.121810    9464 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:16.121852    9464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:16.121874    9464 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:16.121882    9464 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:16.122164    9464 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:16.270073    9464 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:16.363698    9464 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:16.363704    9464 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:16.363905    9464 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2
	I0429 05:08:16.377403    9464 main.go:141] libmachine: STDOUT: 
	I0429 05:08:16.377429    9464 main.go:141] libmachine: STDERR: 
	I0429 05:08:16.377545    9464 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2 +20000M
	I0429 05:08:16.389895    9464 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:16.389917    9464 main.go:141] libmachine: STDERR: 
	I0429 05:08:16.389927    9464 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2
	I0429 05:08:16.389934    9464 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:16.389964    9464 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:23:9a:c2:c8:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/calico-413000/disk.qcow2
	I0429 05:08:16.391864    9464 main.go:141] libmachine: STDOUT: 
	I0429 05:08:16.391882    9464 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:16.391894    9464 client.go:171] duration metric: took 270.202375ms to LocalClient.Create
	I0429 05:08:18.394088    9464 start.go:128] duration metric: took 2.299349167s to createHost
	I0429 05:08:18.394154    9464 start.go:83] releasing machines lock for "calico-413000", held for 2.299529084s
	W0429 05:08:18.394473    9464 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:18.402866    9464 out.go:177] 
	W0429 05:08:18.408960    9464 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:08:18.408989    9464 out.go:239] * 
	* 
	W0429 05:08:18.411709    9464 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:08:18.417906    9464 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.83s)
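
[Editor's note, not part of the captured log] This calico run and the false run that follows fail identically, which matches the QEMU invocations recorded above: minikube never opens a vmnet device itself; socket_vmnet_client connects to /var/run/socket_vmnet first, and QEMU inherits that connection as file descriptor 3 (-netdev socket,id=net0,fd=3), so a refused connection aborts VM creation before boot. Restarting the daemon is the usual fix; the commands below are a sketch based on the socket_vmnet README, not taken from this log, and assume the standalone /opt/socket_vmnet install layout:

	# Running the daemon in the foreground shows why connections are refused:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# Or, for a Homebrew-service install:
	sudo brew services restart socket_vmnet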

                                                
                                    
TestNetworkPlugins/group/false/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-413000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.918487041s)

                                                
                                                
-- stdout --
	* [false-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-413000" primary control-plane node in "false-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:08:21.001682    9582 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:08:21.001817    9582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:21.001821    9582 out.go:304] Setting ErrFile to fd 2...
	I0429 05:08:21.001823    9582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:21.001945    9582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:08:21.002986    9582 out.go:298] Setting JSON to false
	I0429 05:08:21.019162    9582 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5872,"bootTime":1714386629,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:08:21.019229    9582 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:08:21.025112    9582 out.go:177] * [false-413000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:08:21.033120    9582 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:08:21.037127    9582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:08:21.033170    9582 notify.go:220] Checking for updates...
	I0429 05:08:21.041090    9582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:08:21.044135    9582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:08:21.047123    9582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:08:21.050120    9582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:08:21.053438    9582 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:08:21.053504    9582 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:08:21.053559    9582 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:08:21.058136    9582 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:08:21.065120    9582 start.go:297] selected driver: qemu2
	I0429 05:08:21.065128    9582 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:08:21.065135    9582 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:08:21.067371    9582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:08:21.071123    9582 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:08:21.074148    9582 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:08:21.074198    9582 cni.go:84] Creating CNI manager for "false"
	I0429 05:08:21.074229    9582 start.go:340] cluster config:
	{Name:false-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:08:21.078372    9582 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:21.085930    9582 out.go:177] * Starting "false-413000" primary control-plane node in "false-413000" cluster
	I0429 05:08:21.090041    9582 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:08:21.090054    9582 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:08:21.090059    9582 cache.go:56] Caching tarball of preloaded images
	I0429 05:08:21.090112    9582 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:08:21.090116    9582 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:08:21.090165    9582 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/false-413000/config.json ...
	I0429 05:08:21.090175    9582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/false-413000/config.json: {Name:mk7b962bc05f018adb9ba09483faeb3fc997d2a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:08:21.090392    9582 start.go:360] acquireMachinesLock for false-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:21.090423    9582 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "false-413000"
	I0429 05:08:21.090434    9582 start.go:93] Provisioning new machine with config: &{Name:false-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:21.090457    9582 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:21.098120    9582 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:08:21.113464    9582 start.go:159] libmachine.API.Create for "false-413000" (driver="qemu2")
	I0429 05:08:21.113490    9582 client.go:168] LocalClient.Create starting
	I0429 05:08:21.113554    9582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:21.113584    9582 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:21.113593    9582 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:21.113645    9582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:21.113667    9582 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:21.113676    9582 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:21.114017    9582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:21.262897    9582 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:21.498826    9582 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:21.498848    9582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:21.499087    9582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2
	I0429 05:08:21.512458    9582 main.go:141] libmachine: STDOUT: 
	I0429 05:08:21.512477    9582 main.go:141] libmachine: STDERR: 
	I0429 05:08:21.512542    9582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2 +20000M
	I0429 05:08:21.524030    9582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:21.524045    9582 main.go:141] libmachine: STDERR: 
	I0429 05:08:21.524062    9582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2
	I0429 05:08:21.524067    9582 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:21.524110    9582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:26:90:e6:c4:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2
	I0429 05:08:21.525869    9582 main.go:141] libmachine: STDOUT: 
	I0429 05:08:21.525883    9582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:21.525901    9582 client.go:171] duration metric: took 412.406959ms to LocalClient.Create
	I0429 05:08:23.528096    9582 start.go:128] duration metric: took 2.437610042s to createHost
	I0429 05:08:23.528167    9582 start.go:83] releasing machines lock for "false-413000", held for 2.437740875s
	W0429 05:08:23.528307    9582 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:23.534502    9582 out.go:177] * Deleting "false-413000" in qemu2 ...
	W0429 05:08:23.558981    9582 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:23.559012    9582 start.go:728] Will try again in 5 seconds ...
	I0429 05:08:28.561238    9582 start.go:360] acquireMachinesLock for false-413000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:28.561818    9582 start.go:364] duration metric: took 445.875µs to acquireMachinesLock for "false-413000"
	I0429 05:08:28.561965    9582 start.go:93] Provisioning new machine with config: &{Name:false-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:28.562309    9582 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:28.571967    9582 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 05:08:28.620628    9582 start.go:159] libmachine.API.Create for "false-413000" (driver="qemu2")
	I0429 05:08:28.620682    9582 client.go:168] LocalClient.Create starting
	I0429 05:08:28.620805    9582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:28.620875    9582 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:28.620897    9582 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:28.620955    9582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:28.621001    9582 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:28.621021    9582 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:28.621594    9582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:28.776663    9582 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:28.814694    9582 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:28.814698    9582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:28.814895    9582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2
	I0429 05:08:28.827599    9582 main.go:141] libmachine: STDOUT: 
	I0429 05:08:28.827622    9582 main.go:141] libmachine: STDERR: 
	I0429 05:08:28.827690    9582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2 +20000M
	I0429 05:08:28.839001    9582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:28.839019    9582 main.go:141] libmachine: STDERR: 
	I0429 05:08:28.839033    9582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2
	I0429 05:08:28.839037    9582 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:28.839075    9582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8d:36:d9:d6:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/false-413000/disk.qcow2
	I0429 05:08:28.840808    9582 main.go:141] libmachine: STDOUT: 
	I0429 05:08:28.840825    9582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:28.840838    9582 client.go:171] duration metric: took 220.147666ms to LocalClient.Create
	I0429 05:08:30.843151    9582 start.go:128] duration metric: took 2.280807417s to createHost
	I0429 05:08:30.843238    9582 start.go:83] releasing machines lock for "false-413000", held for 2.281400042s
	W0429 05:08:30.843616    9582 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:30.863990    9582 out.go:177] 
	W0429 05:08:30.868220    9582 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:08:30.868235    9582 out.go:239] * 
	* 
	W0429 05:08:30.869574    9582 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:08:30.877291    9582 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.92s)
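
The qemu2 start failures in this report share one immediate cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot connect to the unix socket /var/run/socket_vmnet ("Connection refused"), meaning the socket_vmnet daemon was not running, or not listening, on the CI host when QEMU was launched. A minimal Go probe (hypothetical, not part of the test suite) reproduces the check independently of minikube:

    // probe_socket_vmnet.go: hypothetical diagnostic, not part of the suite.
    // Dials the unix socket that socket_vmnet_client needs; a "connection
    // refused" error here matches the failure mode captured in the logs above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe exits 0 the daemon is reachable; the refusal seen in these logs implicates the host service rather than any individual test.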

TestStartStop/group/old-k8s-version/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.814345083s)

-- stdout --
	* [old-k8s-version-489000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-489000" primary control-plane node in "old-k8s-version-489000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-489000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:08:33.161175    9699 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:08:33.161312    9699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:33.161315    9699 out.go:304] Setting ErrFile to fd 2...
	I0429 05:08:33.161317    9699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:33.161451    9699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:08:33.162600    9699 out.go:298] Setting JSON to false
	I0429 05:08:33.179084    9699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5884,"bootTime":1714386629,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:08:33.179145    9699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:08:33.185415    9699 out.go:177] * [old-k8s-version-489000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:08:33.193468    9699 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:08:33.198429    9699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:08:33.193518    9699 notify.go:220] Checking for updates...
	I0429 05:08:33.204374    9699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:08:33.207356    9699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:08:33.210379    9699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:08:33.213443    9699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:08:33.216757    9699 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:08:33.216831    9699 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:08:33.216883    9699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:08:33.221373    9699 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:08:33.228428    9699 start.go:297] selected driver: qemu2
	I0429 05:08:33.228436    9699 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:08:33.228445    9699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:08:33.230617    9699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:08:33.233398    9699 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:08:33.236482    9699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:08:33.236523    9699 cni.go:84] Creating CNI manager for ""
	I0429 05:08:33.236531    9699 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 05:08:33.236573    9699 start.go:340] cluster config:
	{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:08:33.240956    9699 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:33.249367    9699 out.go:177] * Starting "old-k8s-version-489000" primary control-plane node in "old-k8s-version-489000" cluster
	I0429 05:08:33.253266    9699 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 05:08:33.253281    9699 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0429 05:08:33.253294    9699 cache.go:56] Caching tarball of preloaded images
	I0429 05:08:33.253350    9699 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:08:33.253355    9699 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 05:08:33.253431    9699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/old-k8s-version-489000/config.json ...
	I0429 05:08:33.253442    9699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/old-k8s-version-489000/config.json: {Name:mkda6917d5d90e9a2d04dfd96f7556b1a959a943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:08:33.253647    9699 start.go:360] acquireMachinesLock for old-k8s-version-489000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:33.253677    9699 start.go:364] duration metric: took 24.584µs to acquireMachinesLock for "old-k8s-version-489000"
	I0429 05:08:33.253688    9699 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:33.253711    9699 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:33.262391    9699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:08:33.278065    9699 start.go:159] libmachine.API.Create for "old-k8s-version-489000" (driver="qemu2")
	I0429 05:08:33.278093    9699 client.go:168] LocalClient.Create starting
	I0429 05:08:33.278162    9699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:33.278193    9699 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:33.278202    9699 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:33.278245    9699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:33.278267    9699 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:33.278273    9699 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:33.278587    9699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:33.422052    9699 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:33.557720    9699 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:33.557728    9699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:33.557906    9699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I0429 05:08:33.570638    9699 main.go:141] libmachine: STDOUT: 
	I0429 05:08:33.570661    9699 main.go:141] libmachine: STDERR: 
	I0429 05:08:33.570729    9699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2 +20000M
	I0429 05:08:33.581837    9699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:33.581854    9699 main.go:141] libmachine: STDERR: 
	I0429 05:08:33.581866    9699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I0429 05:08:33.581870    9699 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:33.581897    9699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:04:29:06:7e:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I0429 05:08:33.583577    9699 main.go:141] libmachine: STDOUT: 
	I0429 05:08:33.583593    9699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:33.583609    9699 client.go:171] duration metric: took 305.512541ms to LocalClient.Create
	I0429 05:08:35.585923    9699 start.go:128] duration metric: took 2.332172708s to createHost
	I0429 05:08:35.586028    9699 start.go:83] releasing machines lock for "old-k8s-version-489000", held for 2.332345542s
	W0429 05:08:35.586100    9699 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:35.597247    9699 out.go:177] * Deleting "old-k8s-version-489000" in qemu2 ...
	W0429 05:08:35.622486    9699 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:35.622508    9699 start.go:728] Will try again in 5 seconds ...
	I0429 05:08:40.624641    9699 start.go:360] acquireMachinesLock for old-k8s-version-489000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:40.624952    9699 start.go:364] duration metric: took 258.375µs to acquireMachinesLock for "old-k8s-version-489000"
	I0429 05:08:40.625027    9699 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:40.625174    9699 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:40.634384    9699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:08:40.663444    9699 start.go:159] libmachine.API.Create for "old-k8s-version-489000" (driver="qemu2")
	I0429 05:08:40.663502    9699 client.go:168] LocalClient.Create starting
	I0429 05:08:40.663617    9699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:40.663671    9699 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:40.663684    9699 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:40.663734    9699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:40.663768    9699 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:40.663780    9699 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:40.664322    9699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:40.811903    9699 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:40.877352    9699 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:40.877362    9699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:40.877569    9699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I0429 05:08:40.890157    9699 main.go:141] libmachine: STDOUT: 
	I0429 05:08:40.890185    9699 main.go:141] libmachine: STDERR: 
	I0429 05:08:40.890237    9699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2 +20000M
	I0429 05:08:40.901298    9699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:40.901329    9699 main.go:141] libmachine: STDERR: 
	I0429 05:08:40.901347    9699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I0429 05:08:40.901355    9699 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:40.901395    9699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:76:6a:1c:20:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I0429 05:08:40.903226    9699 main.go:141] libmachine: STDOUT: 
	I0429 05:08:40.903243    9699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:40.903260    9699 client.go:171] duration metric: took 239.750708ms to LocalClient.Create
	I0429 05:08:42.905425    9699 start.go:128] duration metric: took 2.280214583s to createHost
	I0429 05:08:42.905485    9699 start.go:83] releasing machines lock for "old-k8s-version-489000", held for 2.28052275s
	W0429 05:08:42.905869    9699 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-489000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-489000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:42.915040    9699 out.go:177] 
	W0429 05:08:42.921162    9699 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:08:42.921223    9699 out.go:239] * 
	* 
	W0429 05:08:42.923224    9699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:08:42.932084    9699 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (58.970292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.87s)
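
The trace above also documents minikube's recovery path: after the first createHost fails, it deletes the half-created machine, waits five seconds, provisions again from scratch, and only exits with GUEST_PROVISION (exit status 80) once the second attempt fails too. The sketch below is a simplified, hypothetical rendering of that create/delete/retry flow, not minikube's actual start.go:

    // retry_sketch.go: hypothetical simplification of the flow logged above
    // ("StartHost failed, but will try again" / "Will try again in 5 seconds");
    // not minikube's real implementation.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHostWithRetry(create func() error, deleteHost func()) error {
        if err := create(); err == nil {
            return nil
        }
        deleteHost()                // "* Deleting ... in qemu2 ..."
        time.Sleep(5 * time.Second) // fixed delay between the two attempts
        if err := create(); err != nil {
            // The second failure is terminal and surfaces as GUEST_PROVISION.
            return fmt.Errorf("error provisioning guest: %w", err)
        }
        return nil
    }

    func main() {
        create := func() error {
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        }
        fmt.Println(startHostWithRetry(create, func() {}))
    }

Because the failing dependency is external to the VM, the retry cannot succeed: two attempts of roughly 2.3s each plus the 5s delay is why these start tests fail consistently in about ten seconds.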

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-489000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-489000 create -f testdata/busybox.yaml: exit status 1 (28.893542ms)

** stderr ** 
	error: context "old-k8s-version-489000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-489000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (31.932667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (31.568833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
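
This failure, like the remaining ones in the serial group, is a cascade rather than an independent bug: FirstStart never created the cluster, so no kubeconfig context named "old-k8s-version-489000" exists and every kubectl --context invocation exits 1 immediately. A pre-check with client-go's clientcmd package (hypothetical; the suite does not do this) would make the cascade explicit:

    // context_check.go: hypothetical helper, not part of helpers_test.go.
    // Confirms a kubeconfig context exists before blaming the step under test.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // KUBECONFIG as exported by the harness, e.g.
        // /Users/jenkins/minikube-integration/18771-6092/kubeconfig
        cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, "cannot load kubeconfig:", err)
            os.Exit(1)
        }
        if _, ok := cfg.Contexts["old-k8s-version-489000"]; !ok {
            fmt.Fprintln(os.Stderr, "context missing: the earlier start failed; this step cannot run")
            os.Exit(1)
        }
        fmt.Println("context present; a failure here would implicate the step itself")
    }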

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-489000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-489000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-489000 describe deploy/metrics-server -n kube-system: exit status 1 (27.104416ms)

** stderr ** 
	error: context "old-k8s-version-489000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-489000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (31.125084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.1895315s)

-- stdout --
	* [old-k8s-version-489000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-489000" primary control-plane node in "old-k8s-version-489000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:08:45.457766    9748 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:08:45.457907    9748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:45.457911    9748 out.go:304] Setting ErrFile to fd 2...
	I0429 05:08:45.457913    9748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:45.458039    9748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:08:45.459087    9748 out.go:298] Setting JSON to false
	I0429 05:08:45.475063    9748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5896,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:08:45.475123    9748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:08:45.480400    9748 out.go:177] * [old-k8s-version-489000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:08:45.487280    9748 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:08:45.490330    9748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:08:45.487310    9748 notify.go:220] Checking for updates...
	I0429 05:08:45.497293    9748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:08:45.500351    9748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:08:45.503289    9748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:08:45.506341    9748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:08:45.509627    9748 config.go:182] Loaded profile config "old-k8s-version-489000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0429 05:08:45.513338    9748 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0429 05:08:45.516243    9748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:08:45.520304    9748 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 05:08:45.527228    9748 start.go:297] selected driver: qemu2
	I0429 05:08:45.527234    9748 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:08:45.527287    9748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:08:45.529781    9748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:08:45.529832    9748 cni.go:84] Creating CNI manager for ""
	I0429 05:08:45.529845    9748 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 05:08:45.529862    9748 start.go:340] cluster config:
	{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:08:45.534716    9748 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:45.543266    9748 out.go:177] * Starting "old-k8s-version-489000" primary control-plane node in "old-k8s-version-489000" cluster
	I0429 05:08:45.547326    9748 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 05:08:45.547342    9748 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0429 05:08:45.547349    9748 cache.go:56] Caching tarball of preloaded images
	I0429 05:08:45.547418    9748 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:08:45.547423    9748 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 05:08:45.547470    9748 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/old-k8s-version-489000/config.json ...
	I0429 05:08:45.547763    9748 start.go:360] acquireMachinesLock for old-k8s-version-489000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:45.547793    9748 start.go:364] duration metric: took 23.667µs to acquireMachinesLock for "old-k8s-version-489000"
	I0429 05:08:45.547802    9748 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:08:45.547808    9748 fix.go:54] fixHost starting: 
	I0429 05:08:45.547917    9748 fix.go:112] recreateIfNeeded on old-k8s-version-489000: state=Stopped err=<nil>
	W0429 05:08:45.547924    9748 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:08:45.552345    9748 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-489000" ...
	I0429 05:08:45.559320    9748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:76:6a:1c:20:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I0429 05:08:45.561407    9748 main.go:141] libmachine: STDOUT: 
	I0429 05:08:45.561423    9748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:45.561446    9748 fix.go:56] duration metric: took 13.638917ms for fixHost
	I0429 05:08:45.561451    9748 start.go:83] releasing machines lock for "old-k8s-version-489000", held for 13.653667ms
	W0429 05:08:45.561456    9748 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:08:45.561485    9748 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:45.561489    9748 start.go:728] Will try again in 5 seconds ...
	I0429 05:08:50.563716    9748 start.go:360] acquireMachinesLock for old-k8s-version-489000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:50.564164    9748 start.go:364] duration metric: took 352.792µs to acquireMachinesLock for "old-k8s-version-489000"
	I0429 05:08:50.564240    9748 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:08:50.564256    9748 fix.go:54] fixHost starting: 
	I0429 05:08:50.564913    9748 fix.go:112] recreateIfNeeded on old-k8s-version-489000: state=Stopped err=<nil>
	W0429 05:08:50.564930    9748 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:08:50.568282    9748 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-489000" ...
	I0429 05:08:50.576451    9748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:76:6a:1c:20:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I0429 05:08:50.583912    9748 main.go:141] libmachine: STDOUT: 
	I0429 05:08:50.583964    9748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:50.584035    9748 fix.go:56] duration metric: took 19.781875ms for fixHost
	I0429 05:08:50.584051    9748 start.go:83] releasing machines lock for "old-k8s-version-489000", held for 19.869542ms
	W0429 05:08:50.584214    9748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:50.592284    9748 out.go:177] 
	W0429 05:08:50.596318    9748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:08:50.596366    9748 out.go:239] * 
	* 
	W0429 05:08:50.597698    9748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:08:50.607243    9748 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (48.624584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
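
Every failure in this serial group traces back to the same root cause visible in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, which means the socket_vmnet daemon is not running on the CI host. A quick way to confirm this outside the test suite (paths follow the install layout shown in these logs; the manual launch command is a sketch based on socket_vmnet's documented usage, not the exact CI setup):

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, start it manually; root is needed to create the vmnet interface.
	# The gateway address is the documented example default, adjust as needed.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the daemon is listening again, the same start invocation should get past "Restarting existing qemu2 VM" instead of failing with exit status 80.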

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-489000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (32.049167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-489000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.297833ms)

** stderr ** 
	error: context "old-k8s-version-489000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (31.829709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
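
The repeated error: context "old-k8s-version-489000" does not exist is a downstream symptom rather than a separate bug: because SecondStart never brought the cluster up, minikube never (re)wrote the kubeconfig entry that kubectl resolves the --context flag against. A quick check of what contexts actually exist on the runner (standard kubectl, nothing minikube-specific):

	# The profile name should appear here after a successful start;
	# in this failed run it is absent.
	kubectl config get-contexts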

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-489000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (31.146333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
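
The (-want +got) diff above reports every expected v1.20.0 image as missing simply because "image list" ran against a VM that never booted, so the got side is empty. For reference, the core control-plane image set for a given Kubernetes version can be reproduced with kubeadm (a hedged aside: kubeadm lists only the upstream images, while the test's want list also includes minikube's storage-provisioner):

	kubeadm config images list --kubernetes-version v1.20.0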

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-489000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-489000 --alsologtostderr -v=1: exit status 83 (45.927667ms)

-- stdout --
	* The control-plane node old-k8s-version-489000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-489000"

-- /stdout --
** stderr ** 
	I0429 05:08:50.864777    9767 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:08:50.865756    9767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:50.865760    9767 out.go:304] Setting ErrFile to fd 2...
	I0429 05:08:50.865762    9767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:50.865910    9767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:08:50.866129    9767 out.go:298] Setting JSON to false
	I0429 05:08:50.866138    9767 mustload.go:65] Loading cluster: old-k8s-version-489000
	I0429 05:08:50.866333    9767 config.go:182] Loaded profile config "old-k8s-version-489000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0429 05:08:50.871050    9767 out.go:177] * The control-plane node old-k8s-version-489000 host is not running: state=Stopped
	I0429 05:08:50.875046    9767 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-489000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-489000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (30.84025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (31.38ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
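
The exit codes in this block are deliberate rather than crashes: "status" returns 7 when the host is stopped (which the helper flags as "may be ok"), and "pause" returns 83 after printing guidance instead of attempting to pause a stopped node. A hedged sketch of scripting around those codes, using only commands that appear in this log:

	out/minikube-darwin-arm64 status --format='{{.Host}}' -p old-k8s-version-489000
	rc=$?
	if [ "$rc" -ne 0 ]; then
	  # 7 == host stopped in these logs; skip the pause/unpause steps
	  echo "host not running (status exit $rc)"
	fi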

TestStartStop/group/no-preload/serial/FirstStart (9.79s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-385000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-385000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.734362208s)

-- stdout --
	* [no-preload-385000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-385000" primary control-plane node in "no-preload-385000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-385000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:08:51.329306    9790 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:08:51.329443    9790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:51.329446    9790 out.go:304] Setting ErrFile to fd 2...
	I0429 05:08:51.329449    9790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:08:51.329580    9790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:08:51.330665    9790 out.go:298] Setting JSON to false
	I0429 05:08:51.347291    9790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5902,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:08:51.347359    9790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:08:51.350426    9790 out.go:177] * [no-preload-385000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:08:51.357359    9790 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:08:51.361310    9790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:08:51.357430    9790 notify.go:220] Checking for updates...
	I0429 05:08:51.367256    9790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:08:51.370346    9790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:08:51.373311    9790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:08:51.376311    9790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:08:51.379607    9790 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:08:51.379679    9790 config.go:182] Loaded profile config "stopped-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 05:08:51.379716    9790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:08:51.384218    9790 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:08:51.391372    9790 start.go:297] selected driver: qemu2
	I0429 05:08:51.391379    9790 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:08:51.391387    9790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:08:51.393559    9790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:08:51.397297    9790 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:08:51.400432    9790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:08:51.400468    9790 cni.go:84] Creating CNI manager for ""
	I0429 05:08:51.400483    9790 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:08:51.400487    9790 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:08:51.400516    9790 start.go:340] cluster config:
	{Name:no-preload-385000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:08:51.404922    9790 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.413330    9790 out.go:177] * Starting "no-preload-385000" primary control-plane node in "no-preload-385000" cluster
	I0429 05:08:51.417357    9790 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:08:51.417426    9790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/no-preload-385000/config.json ...
	I0429 05:08:51.417447    9790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/no-preload-385000/config.json: {Name:mkb3084c168b958821b13835e9f16f0496bc03b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:08:51.417458    9790 cache.go:107] acquiring lock: {Name:mk4382fa67db0a148bef0d8e0d9b85d44db29b16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.417476    9790 cache.go:107] acquiring lock: {Name:mk68e2e5c9190bb6f9238f94b632af0fb9eafc6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.417487    9790 cache.go:107] acquiring lock: {Name:mk483cf09a5d55c1850525118d5d72ca39f36c61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.417533    9790 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0429 05:08:51.417542    9790 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 66.917µs
	I0429 05:08:51.417548    9790 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0429 05:08:51.417564    9790 cache.go:107] acquiring lock: {Name:mk6c6395d844732e22c0caac5b30cfde451415ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.417593    9790 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 05:08:51.417606    9790 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 05:08:51.417658    9790 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 05:08:51.417663    9790 cache.go:107] acquiring lock: {Name:mk735c21f7eaa220f705143b58c35df4a7176038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.417689    9790 cache.go:107] acquiring lock: {Name:mk462def49eaae368f7f0cde18176946abbdb07d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.417740    9790 cache.go:107] acquiring lock: {Name:mk842d97703196cdf96c3a49f3dcb6269d6cf936 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.417797    9790 cache.go:107] acquiring lock: {Name:mka52f70fad9a0e71f96a2008227c9381e67b661 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:08:51.417829    9790 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 05:08:51.417864    9790 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 05:08:51.417870    9790 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 05:08:51.417895    9790 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 05:08:51.417899    9790 start.go:360] acquireMachinesLock for no-preload-385000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:51.417927    9790 start.go:364] duration metric: took 23.542µs to acquireMachinesLock for "no-preload-385000"
	I0429 05:08:51.417937    9790 start.go:93] Provisioning new machine with config: &{Name:no-preload-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:51.417968    9790 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:51.426316    9790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:08:51.430058    9790 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 05:08:51.430137    9790 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 05:08:51.430303    9790 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 05:08:51.430726    9790 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 05:08:51.430817    9790 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 05:08:51.430834    9790 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 05:08:51.430851    9790 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 05:08:51.441768    9790 start.go:159] libmachine.API.Create for "no-preload-385000" (driver="qemu2")
	I0429 05:08:51.441789    9790 client.go:168] LocalClient.Create starting
	I0429 05:08:51.441861    9790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:51.441897    9790 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:51.441908    9790 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:51.441951    9790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:51.441979    9790 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:51.441989    9790 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:51.442341    9790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:51.593431    9790 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:51.659748    9790 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:51.659777    9790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:51.660030    9790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2
	I0429 05:08:51.672653    9790 main.go:141] libmachine: STDOUT: 
	I0429 05:08:51.672677    9790 main.go:141] libmachine: STDERR: 
	I0429 05:08:51.672748    9790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2 +20000M
	I0429 05:08:51.685516    9790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:51.685620    9790 main.go:141] libmachine: STDERR: 
	I0429 05:08:51.685635    9790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2
	I0429 05:08:51.685640    9790 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:51.685675    9790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ba:6c:28:90:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2
	I0429 05:08:51.687492    9790 main.go:141] libmachine: STDOUT: 
	I0429 05:08:51.687511    9790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:51.687528    9790 client.go:171] duration metric: took 245.735125ms to LocalClient.Create
	I0429 05:08:51.840052    9790 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 05:08:51.840578    9790 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0429 05:08:51.845456    9790 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 05:08:51.885788    9790 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 05:08:51.890568    9790 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 05:08:51.921300    9790 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 05:08:51.937646    9790 cache.go:162] opening:  /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0429 05:08:51.966087    9790 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0429 05:08:51.966097    9790 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 548.497583ms
	I0429 05:08:51.966103    9790 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0429 05:08:53.687743    9790 start.go:128] duration metric: took 2.269768083s to createHost
	I0429 05:08:53.687770    9790 start.go:83] releasing machines lock for "no-preload-385000", held for 2.269842709s
	W0429 05:08:53.687806    9790 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:53.701828    9790 out.go:177] * Deleting "no-preload-385000" in qemu2 ...
	W0429 05:08:53.721178    9790 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:08:53.721192    9790 start.go:728] Will try again in 5 seconds ...
	I0429 05:08:55.566511    9790 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0429 05:08:55.566528    9790 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 4.14882725s
	I0429 05:08:55.566545    9790 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0429 05:08:55.650717    9790 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0429 05:08:55.650729    9790 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 4.23313425s
	I0429 05:08:55.650735    9790 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0429 05:08:55.768702    9790 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0429 05:08:55.768718    9790 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 4.351250041s
	I0429 05:08:55.768728    9790 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0429 05:08:55.844755    9790 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0429 05:08:55.844764    9790 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.4270735s
	I0429 05:08:55.844768    9790 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0429 05:08:56.033447    9790 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0429 05:08:56.033464    9790 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 4.616027334s
	I0429 05:08:56.033473    9790 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0429 05:08:58.721339    9790 start.go:360] acquireMachinesLock for no-preload-385000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:08:58.721825    9790 start.go:364] duration metric: took 407.042µs to acquireMachinesLock for "no-preload-385000"
	I0429 05:08:58.721951    9790 start.go:93] Provisioning new machine with config: &{Name:no-preload-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:08:58.722158    9790 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:08:58.733693    9790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:08:58.774650    9790 start.go:159] libmachine.API.Create for "no-preload-385000" (driver="qemu2")
	I0429 05:08:58.774697    9790 client.go:168] LocalClient.Create starting
	I0429 05:08:58.774798    9790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:08:58.774853    9790 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:58.774865    9790 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:58.774928    9790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:08:58.774966    9790 main.go:141] libmachine: Decoding PEM data...
	I0429 05:08:58.774981    9790 main.go:141] libmachine: Parsing certificate...
	I0429 05:08:58.775464    9790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:08:58.926781    9790 main.go:141] libmachine: Creating SSH key...
	I0429 05:08:58.964488    9790 main.go:141] libmachine: Creating Disk image...
	I0429 05:08:58.964498    9790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:08:58.964717    9790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2
	I0429 05:08:58.977448    9790 main.go:141] libmachine: STDOUT: 
	I0429 05:08:58.977470    9790 main.go:141] libmachine: STDERR: 
	I0429 05:08:58.977524    9790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2 +20000M
	I0429 05:08:58.989185    9790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:08:58.989207    9790 main.go:141] libmachine: STDERR: 
	I0429 05:08:58.989228    9790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2
	I0429 05:08:58.989233    9790 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:08:58.989264    9790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:54:50:76:d1:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2
	I0429 05:08:58.991093    9790 main.go:141] libmachine: STDOUT: 
	I0429 05:08:58.991108    9790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:08:58.991119    9790 client.go:171] duration metric: took 216.418292ms to LocalClient.Create
	I0429 05:08:59.033586    9790 cache.go:157] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0429 05:08:59.033599    9790 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.61605075s
	I0429 05:08:59.033605    9790 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0429 05:08:59.033615    9790 cache.go:87] Successfully saved all images to host disk.
	I0429 05:09:00.993322    9790 start.go:128] duration metric: took 2.271135292s to createHost
	I0429 05:09:00.993404    9790 start.go:83] releasing machines lock for "no-preload-385000", held for 2.2715595s
	W0429 05:09:00.993788    9790 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-385000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-385000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:01.000000    9790 out.go:177] 
	W0429 05:09:01.011013    9790 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:01.011039    9790 out.go:239] * 
	* 
	W0429 05:09:01.012605    9790 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:09:01.021884    9790 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-385000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (53.255584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.79s)
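
Unlike the preloaded runs, this --preload=false run also exercises the per-image cache path: the cache.go lines above show each registry.k8s.io image being fetched and saved to the host cache individually (all eight succeed) while the VM creation itself keeps failing on socket_vmnet. On a retry those downloads are skipped because the tarballs already exist; they can also be pre-seeded explicitly, e.g. (a sketch using minikube's cache subcommand, image names taken from this log):

	out/minikube-darwin-arm64 cache add registry.k8s.io/pause:3.9
	out/minikube-darwin-arm64 cache add registry.k8s.io/etcd:3.5.12-0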

TestStartStop/group/embed-certs/serial/FirstStart (10.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-935000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-935000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (10.79414025s)

-- stdout --
	* [embed-certs-935000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-935000" primary control-plane node in "embed-certs-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:09:00.092468    9837 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:00.092601    9837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:00.092604    9837 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:00.092607    9837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:00.092731    9837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:00.093850    9837 out.go:298] Setting JSON to false
	I0429 05:09:00.110105    9837 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5911,"bootTime":1714386629,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:09:00.110171    9837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:09:00.114774    9837 out.go:177] * [embed-certs-935000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:09:00.122674    9837 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:09:00.126619    9837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:09:00.122739    9837 notify.go:220] Checking for updates...
	I0429 05:09:00.132648    9837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:09:00.135656    9837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:09:00.138688    9837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:09:00.141685    9837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:09:00.143604    9837 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:00.143682    9837 config.go:182] Loaded profile config "no-preload-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:00.143732    9837 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:09:00.147632    9837 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:09:00.154499    9837 start.go:297] selected driver: qemu2
	I0429 05:09:00.154506    9837 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:09:00.154512    9837 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:09:00.156789    9837 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:09:00.159689    9837 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:09:00.162751    9837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:09:00.162791    9837 cni.go:84] Creating CNI manager for ""
	I0429 05:09:00.162806    9837 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:09:00.162809    9837 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:09:00.162852    9837 start.go:340] cluster config:
	{Name:embed-certs-935000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:00.167338    9837 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:00.175628    9837 out.go:177] * Starting "embed-certs-935000" primary control-plane node in "embed-certs-935000" cluster
	I0429 05:09:00.179727    9837 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:09:00.179744    9837 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:09:00.179755    9837 cache.go:56] Caching tarball of preloaded images
	I0429 05:09:00.179810    9837 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:09:00.179815    9837 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:09:00.179875    9837 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/embed-certs-935000/config.json ...
	I0429 05:09:00.179886    9837 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/embed-certs-935000/config.json: {Name:mk9f30bdb0c1f7a387cfff5c6bf5ac065662af8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:09:00.180313    9837 start.go:360] acquireMachinesLock for embed-certs-935000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:00.993624    9837 start.go:364] duration metric: took 813.289208ms to acquireMachinesLock for "embed-certs-935000"
	I0429 05:09:00.994688    9837 start.go:93] Provisioning new machine with config: &{Name:embed-certs-935000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:09:00.995015    9837 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:09:01.006961    9837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:09:01.051131    9837 start.go:159] libmachine.API.Create for "embed-certs-935000" (driver="qemu2")
	I0429 05:09:01.051190    9837 client.go:168] LocalClient.Create starting
	I0429 05:09:01.051365    9837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:09:01.051428    9837 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:01.051451    9837 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:01.051518    9837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:09:01.051570    9837 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:01.051584    9837 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:01.052240    9837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:09:01.207179    9837 main.go:141] libmachine: Creating SSH key...
	I0429 05:09:01.385299    9837 main.go:141] libmachine: Creating Disk image...
	I0429 05:09:01.385307    9837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:09:01.385462    9837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2
	I0429 05:09:01.398417    9837 main.go:141] libmachine: STDOUT: 
	I0429 05:09:01.398440    9837 main.go:141] libmachine: STDERR: 
	I0429 05:09:01.398510    9837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2 +20000M
	I0429 05:09:01.409919    9837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:09:01.409934    9837 main.go:141] libmachine: STDERR: 
	I0429 05:09:01.409952    9837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2
	I0429 05:09:01.409956    9837 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:09:01.409987    9837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:b0:d8:4d:b5:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2
	I0429 05:09:01.411686    9837 main.go:141] libmachine: STDOUT: 
	I0429 05:09:01.411701    9837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:01.411719    9837 client.go:171] duration metric: took 360.522834ms to LocalClient.Create
	I0429 05:09:03.413871    9837 start.go:128] duration metric: took 2.418832333s to createHost
	I0429 05:09:03.413924    9837 start.go:83] releasing machines lock for "embed-certs-935000", held for 2.420270416s
	W0429 05:09:03.413964    9837 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:03.424366    9837 out.go:177] * Deleting "embed-certs-935000" in qemu2 ...
	W0429 05:09:03.449222    9837 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:03.449261    9837 start.go:728] Will try again in 5 seconds ...
	I0429 05:09:08.451549    9837 start.go:360] acquireMachinesLock for embed-certs-935000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:08.452085    9837 start.go:364] duration metric: took 430.958µs to acquireMachinesLock for "embed-certs-935000"
	I0429 05:09:08.452243    9837 start.go:93] Provisioning new machine with config: &{Name:embed-certs-935000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:09:08.452475    9837 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:09:08.461280    9837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:09:08.511969    9837 start.go:159] libmachine.API.Create for "embed-certs-935000" (driver="qemu2")
	I0429 05:09:08.512032    9837 client.go:168] LocalClient.Create starting
	I0429 05:09:08.512142    9837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:09:08.512199    9837 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:08.512216    9837 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:08.512282    9837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:09:08.512325    9837 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:08.512335    9837 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:08.512985    9837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:09:08.664085    9837 main.go:141] libmachine: Creating SSH key...
	I0429 05:09:08.774108    9837 main.go:141] libmachine: Creating Disk image...
	I0429 05:09:08.774121    9837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:09:08.774326    9837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2
	I0429 05:09:08.787086    9837 main.go:141] libmachine: STDOUT: 
	I0429 05:09:08.787108    9837 main.go:141] libmachine: STDERR: 
	I0429 05:09:08.787182    9837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2 +20000M
	I0429 05:09:08.798359    9837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:09:08.798387    9837 main.go:141] libmachine: STDERR: 
	I0429 05:09:08.798405    9837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2
	I0429 05:09:08.798410    9837 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:09:08.798449    9837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ec:d2:e0:e4:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2
	I0429 05:09:08.800244    9837 main.go:141] libmachine: STDOUT: 
	I0429 05:09:08.800260    9837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:08.800280    9837 client.go:171] duration metric: took 288.242083ms to LocalClient.Create
	I0429 05:09:10.802458    9837 start.go:128] duration metric: took 2.349957833s to createHost
	I0429 05:09:10.802519    9837 start.go:83] releasing machines lock for "embed-certs-935000", held for 2.350415083s
	W0429 05:09:10.802923    9837 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:10.815415    9837 out.go:177] 
	W0429 05:09:10.827537    9837 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:10.827566    9837 out.go:239] * 
	* 
	W0429 05:09:10.830246    9837 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:09:10.839956    9837 out.go:177] 

                                                
                                                
** /stderr **
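
Note: the stderr above shows the disk-preparation steps succeeding before the network step fails: the raw bootstrap image is converted to qcow2, then grown by 20000M. A minimal sketch of those two qemu-img invocations, assuming qemu-img is on PATH and using a placeholder file name instead of the machine-directory paths from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run echoes and executes one command, mirroring the "executing:" log lines.
    func run(name string, args ...string) error {
        fmt.Println("executing:", name, args)
        return exec.Command(name, args...).Run()
    }

    func main() {
        disk := "disk.qcow2" // placeholder; the log uses the full machine dir path
        // Convert the raw image to qcow2, as in the log above.
        if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk+".raw", disk); err != nil {
            panic(err)
        }
        // Then grow it by 20000M ("Image resized." in the log).
        if err := run("qemu-img", "resize", disk, "+20000M"); err != nil {
            panic(err)
        }
    }
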
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-935000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (65.209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.86s)
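
Every qemu2 start in this run dies at the same point: the driver launches QEMU through socket_vmnet_client, which cannot reach the socket_vmnet daemon at the unix-socket path shown in the config dump. A minimal sketch (not part of the test suite) that reproduces just that connectivity check:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // SocketVMnetPath from the config dump above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // On this agent this prints "connection refused", matching the log.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the daemon is not running on the agent, this prints the same refusal seen in every STDERR block in this group of tests.
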

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-385000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-385000 create -f testdata/busybox.yaml: exit status 1 (29.685625ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-385000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-385000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (35.987459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (35.563208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
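
This failure is downstream of the failed start: because "minikube start" exited 80, no kubeconfig context was ever written, so every kubectl --context call aborts before reaching a cluster. An illustrative helper (assumed, not from the suite) that performs the same existence check:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // contextExists lists kubeconfig context names via kubectl and looks for one.
    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "view",
            "-o", "jsonpath={.contexts[*].name}").Output()
        if err != nil {
            return false, err
        }
        for _, c := range strings.Fields(string(out)) {
            if c == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := contextExists("no-preload-385000")
        fmt.Println(ok, err) // false <nil> in this run: the context was never created
    }
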

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-385000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-385000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-385000 describe deploy/metrics-server -n kube-system: exit status 1 (28.296334ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-385000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-385000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (32.822166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)
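
The expected string in the assertion above is simply the --registries override prefixed onto the --images override. A one-line illustration of that composition (illustrative only):

    package main

    import "fmt"

    func main() {
        registry := "fake.domain"                 // --registries=MetricsServer=...
        image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=...
        // Prints "fake.domain/registry.k8s.io/echoserver:1.4", the substring
        // the assertion expects in the deployment description.
        fmt.Println(registry + "/" + image)
    }
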

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-385000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-385000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (6.093552166s)

                                                
                                                
-- stdout --
	* [no-preload-385000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-385000" primary control-plane node in "no-preload-385000" cluster
	* Restarting existing qemu2 VM for "no-preload-385000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-385000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:09:04.815457    9884 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:04.815587    9884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:04.815591    9884 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:04.815593    9884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:04.815722    9884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:04.816720    9884 out.go:298] Setting JSON to false
	I0429 05:09:04.832778    9884 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5915,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:09:04.832843    9884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:09:04.838183    9884 out.go:177] * [no-preload-385000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:09:04.845174    9884 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:09:04.845222    9884 notify.go:220] Checking for updates...
	I0429 05:09:04.854156    9884 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:09:04.857185    9884 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:09:04.860051    9884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:09:04.863119    9884 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:09:04.866181    9884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:09:04.869428    9884 config.go:182] Loaded profile config "no-preload-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:04.869673    9884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:09:04.874133    9884 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 05:09:04.881151    9884 start.go:297] selected driver: qemu2
	I0429 05:09:04.881160    9884 start.go:901] validating driver "qemu2" against &{Name:no-preload-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:04.881223    9884 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:09:04.883577    9884 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:09:04.883623    9884 cni.go:84] Creating CNI manager for ""
	I0429 05:09:04.883632    9884 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:09:04.883655    9884 start.go:340] cluster config:
	{Name:no-preload-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:04.887932    9884 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.896168    9884 out.go:177] * Starting "no-preload-385000" primary control-plane node in "no-preload-385000" cluster
	I0429 05:09:04.900162    9884 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:09:04.900240    9884 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/no-preload-385000/config.json ...
	I0429 05:09:04.900253    9884 cache.go:107] acquiring lock: {Name:mk4382fa67db0a148bef0d8e0d9b85d44db29b16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.900261    9884 cache.go:107] acquiring lock: {Name:mk68e2e5c9190bb6f9238f94b632af0fb9eafc6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.900262    9884 cache.go:107] acquiring lock: {Name:mk483cf09a5d55c1850525118d5d72ca39f36c61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.900309    9884 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0429 05:09:04.900314    9884 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 65.084µs
	I0429 05:09:04.900320    9884 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0429 05:09:04.900324    9884 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0429 05:09:04.900327    9884 cache.go:107] acquiring lock: {Name:mk735c21f7eaa220f705143b58c35df4a7176038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.900331    9884 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.833µs
	I0429 05:09:04.900335    9884 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0429 05:09:04.900334    9884 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0429 05:09:04.900342    9884 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 85.833µs
	I0429 05:09:04.900341    9884 cache.go:107] acquiring lock: {Name:mk6c6395d844732e22c0caac5b30cfde451415ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.900347    9884 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0429 05:09:04.900353    9884 cache.go:107] acquiring lock: {Name:mka52f70fad9a0e71f96a2008227c9381e67b661 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.900366    9884 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0429 05:09:04.900370    9884 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 43.542µs
	I0429 05:09:04.900374    9884 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0429 05:09:04.900379    9884 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0429 05:09:04.900384    9884 cache.go:107] acquiring lock: {Name:mk842d97703196cdf96c3a49f3dcb6269d6cf936 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.900391    9884 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 50.75µs
	I0429 05:09:04.900394    9884 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0429 05:09:04.900395    9884 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0429 05:09:04.900398    9884 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 45.834µs
	I0429 05:09:04.900402    9884 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0429 05:09:04.900425    9884 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0429 05:09:04.900428    9884 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 44.75µs
	I0429 05:09:04.900433    9884 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0429 05:09:04.900466    9884 cache.go:107] acquiring lock: {Name:mk462def49eaae368f7f0cde18176946abbdb07d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:04.900516    9884 cache.go:115] /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0429 05:09:04.900520    9884 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 79.291µs
	I0429 05:09:04.900524    9884 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0429 05:09:04.900528    9884 cache.go:87] Successfully saved all images to host disk.
	I0429 05:09:04.900707    9884 start.go:360] acquireMachinesLock for no-preload-385000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:04.900742    9884 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "no-preload-385000"
	I0429 05:09:04.900752    9884 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:04.900758    9884 fix.go:54] fixHost starting: 
	I0429 05:09:04.900882    9884 fix.go:112] recreateIfNeeded on no-preload-385000: state=Stopped err=<nil>
	W0429 05:09:04.900895    9884 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:09:04.909167    9884 out.go:177] * Restarting existing qemu2 VM for "no-preload-385000" ...
	I0429 05:09:04.913166    9884 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:54:50:76:d1:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2
	I0429 05:09:04.915246    9884 main.go:141] libmachine: STDOUT: 
	I0429 05:09:04.915266    9884 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:04.915295    9884 fix.go:56] duration metric: took 14.536458ms for fixHost
	I0429 05:09:04.915300    9884 start.go:83] releasing machines lock for "no-preload-385000", held for 14.554666ms
	W0429 05:09:04.915306    9884 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:04.915349    9884 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:04.915354    9884 start.go:728] Will try again in 5 seconds ...
	I0429 05:09:09.917597    9884 start.go:360] acquireMachinesLock for no-preload-385000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:10.802682    9884 start.go:364] duration metric: took 884.944334ms to acquireMachinesLock for "no-preload-385000"
	I0429 05:09:10.802879    9884 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:10.802904    9884 fix.go:54] fixHost starting: 
	I0429 05:09:10.803628    9884 fix.go:112] recreateIfNeeded on no-preload-385000: state=Stopped err=<nil>
	W0429 05:09:10.803654    9884 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:09:10.823467    9884 out.go:177] * Restarting existing qemu2 VM for "no-preload-385000" ...
	I0429 05:09:10.830649    9884 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:54:50:76:d1:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/no-preload-385000/disk.qcow2
	I0429 05:09:10.839818    9884 main.go:141] libmachine: STDOUT: 
	I0429 05:09:10.839885    9884 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:10.839971    9884 fix.go:56] duration metric: took 37.070792ms for fixHost
	I0429 05:09:10.839987    9884 start.go:83] releasing machines lock for "no-preload-385000", held for 37.240125ms
	W0429 05:09:10.840138    9884 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-385000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-385000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:10.855445    9884 out.go:177] 
	W0429 05:09:10.859656    9884 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:10.859690    9884 out.go:239] * 
	* 
	W0429 05:09:10.861844    9884 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:09:10.868485    9884 out.go:177] 

                                                
                                                
** /stderr **
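
One detail worth noting from the stderr above: with --preload=false, minikube verifies each required image tarball in the local cache instead of the preloaded tarball, and every check in this run is a hit ("Successfully saved all images to host disk"). A sketch of that existence check, with the image-name-to-path mapping inferred from the cache paths in the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachedPath maps e.g. registry.k8s.io/kube-apiserver:v1.30.0 to
    // <cache>/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0,
    // matching the paths printed by cache.go above.
    func cachedPath(cacheDir, image string) string {
        return filepath.Join(cacheDir, "images", "arm64",
            strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        cache := os.ExpandEnv("$HOME/.minikube/cache")
        for _, img := range []string{
            "registry.k8s.io/kube-apiserver:v1.30.0",
            "registry.k8s.io/pause:3.9",
        } {
            if _, err := os.Stat(cachedPath(cache, img)); err == nil {
                fmt.Println(img, "exists in cache")
            } else {
                fmt.Println(img, "missing:", err)
            }
        }
    }
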
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-385000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (48.904083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.14s)
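
Both start paths in this run follow the same give-up pattern visible in the logs: one attempt, a "will try again" warning, a fixed five-second wait, one retry, then the GUEST_PROVISION exit. A rough sketch of that control flow, with startHost standing in for the qemu2 driver call (assumed names, not minikube's actual code):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // startHost stands in for the driver start that fails throughout this run.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err = startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                os.Exit(80) // the exit status the test assertions report
            }
        }
    }
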

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-935000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-935000 create -f testdata/busybox.yaml: exit status 1 (29.648042ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-935000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-935000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (32.824084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (36.57825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-385000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (36.287042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
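
Each post-mortem above runs the same status probe, and exit status 7 simply mirrors the Stopped host state, which is why the helper logs it as "may be ok". A sketch of that probe, with the binary path and profile name taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "no-preload-385000").Output()
        fmt.Printf("host state: %s", out) // "Stopped" in this run
        if err != nil {
            // Non-zero exit (7 here) reports the stopped state, not a test bug.
            fmt.Println("status error:", err, "(may be ok)")
        }
    }
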

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-385000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-385000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-385000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.772667ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-385000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-385000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (33.468958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-935000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-935000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-935000 describe deploy/metrics-server -n kube-system: exit status 1 (28.863667ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-935000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-935000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (41.058958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-385000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (33.668542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
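
The (-want +got) output above has the shape of a go-cmp diff: with no running VM, "image list" returns nothing, so the entire expected list is reported as missing. A self-contained reproduction of that comparison, assuming the github.com/google/go-cmp module (the suite's exact diff call is not shown in this log):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/coredns/coredns:v1.11.1",
            "registry.k8s.io/etcd:3.5.12-0",
            "registry.k8s.io/kube-apiserver:v1.30.0",
            "registry.k8s.io/kube-controller-manager:v1.30.0",
            "registry.k8s.io/kube-proxy:v1.30.0",
            "registry.k8s.io/kube-scheduler:v1.30.0",
            "registry.k8s.io/pause:3.9",
        }
        var got []string // "image list" returned nothing: the VM never started
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.30.0 images missing (-want +got):\n%s", diff)
        }
    }
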

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-385000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-385000 --alsologtostderr -v=1: exit status 83 (53.909875ms)

-- stdout --
	* The control-plane node no-preload-385000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-385000"

-- /stdout --
** stderr ** 
	I0429 05:09:11.146550    9918 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:11.146710    9918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:11.146713    9918 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:11.146715    9918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:11.146833    9918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:11.147070    9918 out.go:298] Setting JSON to false
	I0429 05:09:11.147082    9918 mustload.go:65] Loading cluster: no-preload-385000
	I0429 05:09:11.147282    9918 config.go:182] Loaded profile config "no-preload-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:11.153440    9918 out.go:177] * The control-plane node no-preload-385000 host is not running: state=Stopped
	I0429 05:09:11.161366    9918 out.go:177]   To start a cluster, run: "minikube start -p no-preload-385000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-385000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (34.59925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (29.893ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (10.045614667s)

-- stdout --
	* [default-k8s-diff-port-892000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-892000" primary control-plane node in "default-k8s-diff-port-892000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-892000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:09:11.847797    9960 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:11.847906    9960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:11.847909    9960 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:11.847911    9960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:11.848058    9960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:11.849138    9960 out.go:298] Setting JSON to false
	I0429 05:09:11.865375    9960 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5922,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:09:11.865439    9960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:09:11.869729    9960 out.go:177] * [default-k8s-diff-port-892000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:09:11.880674    9960 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:09:11.876783    9960 notify.go:220] Checking for updates...
	I0429 05:09:11.888683    9960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:09:11.891697    9960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:09:11.894702    9960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:09:11.897694    9960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:09:11.900682    9960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:09:11.904154    9960 config.go:182] Loaded profile config "embed-certs-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:11.904219    9960 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:11.904266    9960 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:09:11.908713    9960 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:09:11.915694    9960 start.go:297] selected driver: qemu2
	I0429 05:09:11.915700    9960 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:09:11.915707    9960 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:09:11.918205    9960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:09:11.922696    9960 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:09:11.925826    9960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:09:11.925859    9960 cni.go:84] Creating CNI manager for ""
	I0429 05:09:11.925867    9960 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:09:11.925871    9960 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:09:11.925901    9960 start.go:340] cluster config:
	{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:11.930486    9960 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:11.936695    9960 out.go:177] * Starting "default-k8s-diff-port-892000" primary control-plane node in "default-k8s-diff-port-892000" cluster
	I0429 05:09:11.940743    9960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:09:11.940759    9960 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:09:11.940772    9960 cache.go:56] Caching tarball of preloaded images
	I0429 05:09:11.940837    9960 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:09:11.940843    9960 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:09:11.940912    9960 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/default-k8s-diff-port-892000/config.json ...
	I0429 05:09:11.940924    9960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/default-k8s-diff-port-892000/config.json: {Name:mk0c5b4870cc0eacbdf232535bff4aa3de3a5988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:09:11.941142    9960 start.go:360] acquireMachinesLock for default-k8s-diff-port-892000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:11.941180    9960 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "default-k8s-diff-port-892000"
	I0429 05:09:11.941192    9960 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:09:11.941219    9960 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:09:11.948691    9960 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:09:11.965782    9960 start.go:159] libmachine.API.Create for "default-k8s-diff-port-892000" (driver="qemu2")
	I0429 05:09:11.965810    9960 client.go:168] LocalClient.Create starting
	I0429 05:09:11.965878    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:09:11.965913    9960 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:11.965922    9960 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:11.965959    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:09:11.965981    9960 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:11.965988    9960 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:11.966329    9960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:09:12.113335    9960 main.go:141] libmachine: Creating SSH key...
	I0429 05:09:12.315309    9960 main.go:141] libmachine: Creating Disk image...
	I0429 05:09:12.315324    9960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:09:12.315524    9960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I0429 05:09:12.328366    9960 main.go:141] libmachine: STDOUT: 
	I0429 05:09:12.328399    9960 main.go:141] libmachine: STDERR: 
	I0429 05:09:12.328450    9960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2 +20000M
	I0429 05:09:12.339578    9960 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:09:12.339607    9960 main.go:141] libmachine: STDERR: 
	I0429 05:09:12.339628    9960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I0429 05:09:12.339643    9960 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:09:12.339674    9960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f5:5d:e9:53:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I0429 05:09:12.341451    9960 main.go:141] libmachine: STDOUT: 
	I0429 05:09:12.341466    9960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:12.341491    9960 client.go:171] duration metric: took 375.675625ms to LocalClient.Create
	I0429 05:09:14.343693    9960 start.go:128] duration metric: took 2.40245275s to createHost
	I0429 05:09:14.343762    9960 start.go:83] releasing machines lock for "default-k8s-diff-port-892000", held for 2.402577458s
	W0429 05:09:14.343832    9960 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:14.355149    9960 out.go:177] * Deleting "default-k8s-diff-port-892000" in qemu2 ...
	W0429 05:09:14.386242    9960 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:14.386273    9960 start.go:728] Will try again in 5 seconds ...
	I0429 05:09:19.388508    9960 start.go:360] acquireMachinesLock for default-k8s-diff-port-892000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:19.388846    9960 start.go:364] duration metric: took 249.958µs to acquireMachinesLock for "default-k8s-diff-port-892000"
	I0429 05:09:19.389018    9960 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:09:19.389294    9960 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:09:19.397728    9960 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:09:19.448466    9960 start.go:159] libmachine.API.Create for "default-k8s-diff-port-892000" (driver="qemu2")
	I0429 05:09:19.448518    9960 client.go:168] LocalClient.Create starting
	I0429 05:09:19.448670    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:09:19.448735    9960 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:19.448752    9960 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:19.448818    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:09:19.448862    9960 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:19.448873    9960 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:19.449436    9960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:09:19.604752    9960 main.go:141] libmachine: Creating SSH key...
	I0429 05:09:19.774374    9960 main.go:141] libmachine: Creating Disk image...
	I0429 05:09:19.774381    9960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:09:19.774576    9960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I0429 05:09:19.787348    9960 main.go:141] libmachine: STDOUT: 
	I0429 05:09:19.787368    9960 main.go:141] libmachine: STDERR: 
	I0429 05:09:19.787438    9960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2 +20000M
	I0429 05:09:19.798480    9960 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:09:19.798504    9960 main.go:141] libmachine: STDERR: 
	I0429 05:09:19.798518    9960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I0429 05:09:19.798525    9960 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:09:19.798561    9960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2c:24:7d:35:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I0429 05:09:19.800179    9960 main.go:141] libmachine: STDOUT: 
	I0429 05:09:19.800195    9960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:19.800209    9960 client.go:171] duration metric: took 351.685542ms to LocalClient.Create
	I0429 05:09:21.802389    9960 start.go:128] duration metric: took 2.413062833s to createHost
	I0429 05:09:21.802563    9960 start.go:83] releasing machines lock for "default-k8s-diff-port-892000", held for 2.413597042s
	W0429 05:09:21.802949    9960 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-892000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-892000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:21.817593    9960 out.go:177] 
	W0429 05:09:21.820733    9960 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:21.820778    9960 out.go:239] * 
	* 
	W0429 05:09:21.823317    9960 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:09:21.838583    9960 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (74.332458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.12s)
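Every failed start in this report bottoms out in the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu-system-aarch64 command in the trace above is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to the socket_vmnet unix socket and then hands the connected descriptor to qemu (the `-netdev socket,id=net0,fd=3` argument); with no daemon listening on that socket, the connect is refused and the VM is never created. A hypothetical pre-flight check that reproduces the failure outside the harness (socket path taken from the log; a sketch, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// The same unix socket socket_vmnet_client tries to reach.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}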

TestStartStop/group/embed-certs/serial/SecondStart (7.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-935000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-935000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (7.061742167s)

-- stdout --
	* [embed-certs-935000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-935000" primary control-plane node in "embed-certs-935000" cluster
	* Restarting existing qemu2 VM for "embed-certs-935000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-935000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:09:14.849122    9986 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:14.849260    9986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:14.849263    9986 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:14.849266    9986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:14.849397    9986 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:14.850402    9986 out.go:298] Setting JSON to false
	I0429 05:09:14.866262    9986 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5925,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:09:14.866324    9986 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:09:14.871389    9986 out.go:177] * [embed-certs-935000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:09:14.878377    9986 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:09:14.878407    9986 notify.go:220] Checking for updates...
	I0429 05:09:14.886416    9986 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:09:14.889348    9986 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:09:14.892335    9986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:09:14.895468    9986 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:09:14.898424    9986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:09:14.901716    9986 config.go:182] Loaded profile config "embed-certs-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:14.901954    9986 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:09:14.906384    9986 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 05:09:14.913345    9986 start.go:297] selected driver: qemu2
	I0429 05:09:14.913353    9986 start.go:901] validating driver "qemu2" against &{Name:embed-certs-935000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:14.913400    9986 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:09:14.915688    9986 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:09:14.915728    9986 cni.go:84] Creating CNI manager for ""
	I0429 05:09:14.915736    9986 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:09:14.915763    9986 start.go:340] cluster config:
	{Name:embed-certs-935000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:14.920127    9986 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:14.927355    9986 out.go:177] * Starting "embed-certs-935000" primary control-plane node in "embed-certs-935000" cluster
	I0429 05:09:14.931360    9986 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:09:14.931373    9986 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:09:14.931383    9986 cache.go:56] Caching tarball of preloaded images
	I0429 05:09:14.931436    9986 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:09:14.931441    9986 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:09:14.931492    9986 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/embed-certs-935000/config.json ...
	I0429 05:09:14.932029    9986 start.go:360] acquireMachinesLock for embed-certs-935000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:14.932064    9986 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "embed-certs-935000"
	I0429 05:09:14.932074    9986 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:14.932081    9986 fix.go:54] fixHost starting: 
	I0429 05:09:14.932199    9986 fix.go:112] recreateIfNeeded on embed-certs-935000: state=Stopped err=<nil>
	W0429 05:09:14.932208    9986 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:09:14.940397    9986 out.go:177] * Restarting existing qemu2 VM for "embed-certs-935000" ...
	I0429 05:09:14.944479    9986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ec:d2:e0:e4:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2
	I0429 05:09:14.946608    9986 main.go:141] libmachine: STDOUT: 
	I0429 05:09:14.946629    9986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:14.946661    9986 fix.go:56] duration metric: took 14.58075ms for fixHost
	I0429 05:09:14.946669    9986 start.go:83] releasing machines lock for "embed-certs-935000", held for 14.600625ms
	W0429 05:09:14.946678    9986 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:14.946717    9986 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:14.946722    9986 start.go:728] Will try again in 5 seconds ...
	I0429 05:09:19.948803    9986 start.go:360] acquireMachinesLock for embed-certs-935000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:21.802698    9986 start.go:364] duration metric: took 1.853836417s to acquireMachinesLock for "embed-certs-935000"
	I0429 05:09:21.802882    9986 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:21.802907    9986 fix.go:54] fixHost starting: 
	I0429 05:09:21.803640    9986 fix.go:112] recreateIfNeeded on embed-certs-935000: state=Stopped err=<nil>
	W0429 05:09:21.803666    9986 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:09:21.817592    9986 out.go:177] * Restarting existing qemu2 VM for "embed-certs-935000" ...
	I0429 05:09:21.824929    9986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ec:d2:e0:e4:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/embed-certs-935000/disk.qcow2
	I0429 05:09:21.834445    9986 main.go:141] libmachine: STDOUT: 
	I0429 05:09:21.834532    9986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:21.834624    9986 fix.go:56] duration metric: took 31.712625ms for fixHost
	I0429 05:09:21.834651    9986 start.go:83] releasing machines lock for "embed-certs-935000", held for 31.893042ms
	W0429 05:09:21.834839    9986 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-935000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-935000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:21.850566    9986 out.go:177] 
	W0429 05:09:21.854771    9986 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:21.854807    9986 out.go:239] * 
	* 
	W0429 05:09:21.857624    9986 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:09:21.868577    9986 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-935000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (62.0945ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.13s)
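The trace above also shows the start path's retry shape: one attempt, a "! StartHost failed, but will try again" warning, a fixed five-second back-off ("Will try again in 5 seconds ..."), a second attempt, and finally exit status 80 with reason GUEST_PROVISION. A condensed sketch of that control flow, simplified from the log rather than taken from minikube's source:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the driver start that fails in this report.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the fixed back-off visible in the trace
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status the tests record
			}
		}
	}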

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892000 create -f testdata/busybox.yaml: exit status 1 (31.904292ms)

** stderr ** 
	error: context "default-k8s-diff-port-892000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-892000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (33.121792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (38.149042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
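The kubectl failures here are secondary: because FirstStart never created the VM, minikube never wrote a "default-k8s-diff-port-892000" context into the kubeconfig the suite points at, so every `kubectl --context ...` call fails before reaching any API server. A hypothetical check using k8s.io/client-go, with the kubeconfig path taken from the log:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig the test environment points at.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18771-6092/kubeconfig")
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["default-k8s-diff-port-892000"]; !ok {
			fmt.Println(`context "default-k8s-diff-port-892000" does not exist`) // matches kubectl's error
		}
	}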

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-935000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (36.601833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-935000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.219459ms)

** stderr ** 
	error: context "embed-certs-935000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (33.525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-935000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (32.788ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)
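The (-want +got) layout above is the diff format emitted by github.com/google/go-cmp; every expected image sits on the -want side and the +got side is empty because image list --format=json had no cluster to query. A sketch of how such a comparison is typically written (the want list is copied from the diff; the surrounding program is illustrative, not minikube's code):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected v1.30.0 images, copied from the -want side of the diff above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/kube-controller-manager:v1.30.0",
		"registry.k8s.io/kube-proxy:v1.30.0",
		"registry.k8s.io/kube-scheduler:v1.30.0",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // `image list --format=json` returned nothing here.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.0 images missing (-want +got):\n%s", diff)
	}
}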

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-892000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892000 describe deploy/metrics-server -n kube-system: exit status 1 (28.905334ms)

** stderr ** 
	error: context "default-k8s-diff-port-892000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-892000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (39.005917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)
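The assertion at start_stop_delete_test.go:221 expects the deployment image to carry the registry override in front of the image override, i.e. the --registries value joined to the --images value with a slash. A tiny illustration of that composition (the values are taken from the addons enable invocation above; the program itself is illustrative):

package main

import "fmt"

func main() {
	// --images=MetricsServer=... and --registries=MetricsServer=... from the
	// `addons enable` command above compose as "<registry>/<image>".
	registries := map[string]string{"MetricsServer": "fake.domain"}
	images := map[string]string{"MetricsServer": "registry.k8s.io/echoserver:1.4"}
	fmt.Println(registries["MetricsServer"] + "/" + images["MetricsServer"])
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}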

TestStartStop/group/embed-certs/serial/Pause (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-935000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-935000 --alsologtostderr -v=1: exit status 83 (52.7615ms)

-- stdout --
	* The control-plane node embed-certs-935000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-935000"

-- /stdout --
** stderr ** 
	I0429 05:09:22.168385   10021 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:22.168536   10021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:22.168539   10021 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:22.168541   10021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:22.168692   10021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:22.168915   10021 out.go:298] Setting JSON to false
	I0429 05:09:22.168923   10021 mustload.go:65] Loading cluster: embed-certs-935000
	I0429 05:09:22.169118   10021 config.go:182] Loaded profile config "embed-certs-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:22.174013   10021 out.go:177] * The control-plane node embed-certs-935000 host is not running: state=Stopped
	I0429 05:09:22.182034   10021 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-935000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-935000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (33.532292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (29.7485ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-053000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-053000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.84239375s)

-- stdout --
	* [newest-cni-053000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-053000" primary control-plane node in "newest-cni-053000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-053000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:09:22.641750   10052 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:22.641895   10052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:22.641902   10052 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:22.641904   10052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:22.642047   10052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:22.643354   10052 out.go:298] Setting JSON to false
	I0429 05:09:22.659579   10052 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5933,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:09:22.659655   10052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:09:22.664850   10052 out.go:177] * [newest-cni-053000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:09:22.670811   10052 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:09:22.674855   10052 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:09:22.670863   10052 notify.go:220] Checking for updates...
	I0429 05:09:22.681804   10052 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:09:22.685903   10052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:09:22.688755   10052 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:09:22.691811   10052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:09:22.695143   10052 config.go:182] Loaded profile config "default-k8s-diff-port-892000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:22.695207   10052 config.go:182] Loaded profile config "multinode-368000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:22.695266   10052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:09:22.697760   10052 out.go:177] * Using the qemu2 driver based on user configuration
	I0429 05:09:22.704840   10052 start.go:297] selected driver: qemu2
	I0429 05:09:22.704848   10052 start.go:901] validating driver "qemu2" against <nil>
	I0429 05:09:22.704856   10052 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:09:22.707303   10052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0429 05:09:22.707326   10052 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0429 05:09:22.710809   10052 out.go:177] * Automatically selected the socket_vmnet network
	I0429 05:09:22.713838   10052 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0429 05:09:22.713866   10052 cni.go:84] Creating CNI manager for ""
	I0429 05:09:22.713876   10052 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:09:22.713880   10052 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:09:22.713907   10052 start.go:340] cluster config:
	{Name:newest-cni-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:22.718536   10052 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:22.726812   10052 out.go:177] * Starting "newest-cni-053000" primary control-plane node in "newest-cni-053000" cluster
	I0429 05:09:22.730846   10052 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:09:22.730863   10052 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:09:22.730872   10052 cache.go:56] Caching tarball of preloaded images
	I0429 05:09:22.730949   10052 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:09:22.730961   10052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:09:22.731045   10052 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/newest-cni-053000/config.json ...
	I0429 05:09:22.731056   10052 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/newest-cni-053000/config.json: {Name:mke89c8b33ef68666f844ec45f60e0f9df49fbcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:09:22.731434   10052 start.go:360] acquireMachinesLock for newest-cni-053000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:22.731469   10052 start.go:364] duration metric: took 29µs to acquireMachinesLock for "newest-cni-053000"
	I0429 05:09:22.731481   10052 start.go:93] Provisioning new machine with config: &{Name:newest-cni-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:09:22.731516   10052 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:09:22.739774   10052 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:09:22.757961   10052 start.go:159] libmachine.API.Create for "newest-cni-053000" (driver="qemu2")
	I0429 05:09:22.757994   10052 client.go:168] LocalClient.Create starting
	I0429 05:09:22.758061   10052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:09:22.758093   10052 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:22.758108   10052 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:22.758163   10052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:09:22.758188   10052 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:22.758195   10052 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:22.758732   10052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:09:22.901892   10052 main.go:141] libmachine: Creating SSH key...
	I0429 05:09:22.971946   10052 main.go:141] libmachine: Creating Disk image...
	I0429 05:09:22.971952   10052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:09:22.972147   10052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2
	I0429 05:09:22.984896   10052 main.go:141] libmachine: STDOUT: 
	I0429 05:09:22.984919   10052 main.go:141] libmachine: STDERR: 
	I0429 05:09:22.984965   10052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2 +20000M
	I0429 05:09:22.996129   10052 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:09:22.996146   10052 main.go:141] libmachine: STDERR: 
	I0429 05:09:22.996166   10052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2
	I0429 05:09:22.996171   10052 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:09:22.996202   10052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:21:7d:38:77:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2
	I0429 05:09:22.997977   10052 main.go:141] libmachine: STDOUT: 
	I0429 05:09:22.997992   10052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:22.998011   10052 client.go:171] duration metric: took 240.012458ms to LocalClient.Create
	I0429 05:09:25.000221   10052 start.go:128] duration metric: took 2.268684292s to createHost
	I0429 05:09:25.000310   10052 start.go:83] releasing machines lock for "newest-cni-053000", held for 2.268835417s
	W0429 05:09:25.000415   10052 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:25.016818   10052 out.go:177] * Deleting "newest-cni-053000" in qemu2 ...
	W0429 05:09:25.045944   10052 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:25.045974   10052 start.go:728] Will try again in 5 seconds ...
	I0429 05:09:30.048092   10052 start.go:360] acquireMachinesLock for newest-cni-053000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:30.055767   10052 start.go:364] duration metric: took 7.601625ms to acquireMachinesLock for "newest-cni-053000"
	I0429 05:09:30.055826   10052 start.go:93] Provisioning new machine with config: &{Name:newest-cni-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:09:30.056056   10052 start.go:125] createHost starting for "" (driver="qemu2")
	I0429 05:09:30.066108   10052 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 05:09:30.113339   10052 start.go:159] libmachine.API.Create for "newest-cni-053000" (driver="qemu2")
	I0429 05:09:30.113406   10052 client.go:168] LocalClient.Create starting
	I0429 05:09:30.113573   10052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/ca.pem
	I0429 05:09:30.113655   10052 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:30.113670   10052 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:30.113726   10052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18771-6092/.minikube/certs/cert.pem
	I0429 05:09:30.113774   10052 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:30.113793   10052 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:30.114278   10052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso...
	I0429 05:09:30.269002   10052 main.go:141] libmachine: Creating SSH key...
	I0429 05:09:30.380437   10052 main.go:141] libmachine: Creating Disk image...
	I0429 05:09:30.380450   10052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0429 05:09:30.380640   10052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2.raw /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2
	I0429 05:09:30.394251   10052 main.go:141] libmachine: STDOUT: 
	I0429 05:09:30.394276   10052 main.go:141] libmachine: STDERR: 
	I0429 05:09:30.394356   10052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2 +20000M
	I0429 05:09:30.406722   10052 main.go:141] libmachine: STDOUT: Image resized.
	
	I0429 05:09:30.406746   10052 main.go:141] libmachine: STDERR: 
	I0429 05:09:30.406757   10052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2
	I0429 05:09:30.406772   10052 main.go:141] libmachine: Starting QEMU VM...
	I0429 05:09:30.406805   10052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:59:99:ac:af:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2
	I0429 05:09:30.408591   10052 main.go:141] libmachine: STDOUT: 
	I0429 05:09:30.408609   10052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:30.408625   10052 client.go:171] duration metric: took 295.20225ms to LocalClient.Create
	I0429 05:09:32.410827   10052 start.go:128] duration metric: took 2.354668708s to createHost
	I0429 05:09:32.410923   10052 start.go:83] releasing machines lock for "newest-cni-053000", held for 2.355129666s
	W0429 05:09:32.411318   10052 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-053000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:32.420018   10052 out.go:177] 
	W0429 05:09:32.424967   10052 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:32.425012   10052 out.go:239] * 
	W0429 05:09:32.427530   10052 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:09:32.440004   10052 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-053000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000: exit status 7 (72.059417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
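Every failure in this run traces back to the same root cause visible in the stderr above: /var/run/socket_vmnet refuses connections when the qemu2 driver launches the VM through socket_vmnet_client. A quick standalone probe of that socket (the path is taken from the log; the program itself is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client is pointed at in the
	// qemu-system-aarch64 invocation above. "connection refused" here
	// reproduces the ERROR lines and means the socket_vmnet daemon is down.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}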

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.786958416s)

-- stdout --
	* [default-k8s-diff-port-892000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-892000" primary control-plane node in "default-k8s-diff-port-892000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-892000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-892000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:09:24.333489   10072 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:24.333639   10072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:24.333642   10072 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:24.333644   10072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:24.333766   10072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:24.334765   10072 out.go:298] Setting JSON to false
	I0429 05:09:24.350643   10072 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5935,"bootTime":1714386629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:09:24.350699   10072 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:09:24.355621   10072 out.go:177] * [default-k8s-diff-port-892000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:09:24.362722   10072 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:09:24.362759   10072 notify.go:220] Checking for updates...
	I0429 05:09:24.366659   10072 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:09:24.370642   10072 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:09:24.373658   10072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:09:24.376613   10072 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:09:24.379658   10072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:09:24.383023   10072 config.go:182] Loaded profile config "default-k8s-diff-port-892000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:24.383288   10072 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:09:24.387636   10072 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 05:09:24.394684   10072 start.go:297] selected driver: qemu2
	I0429 05:09:24.394694   10072 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:24.394750   10072 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:09:24.396980   10072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:09:24.397025   10072 cni.go:84] Creating CNI manager for ""
	I0429 05:09:24.397033   10072 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:09:24.397058   10072 start.go:340] cluster config:
	{Name:default-k8s-diff-port-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:24.401179   10072 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:24.409667   10072 out.go:177] * Starting "default-k8s-diff-port-892000" primary control-plane node in "default-k8s-diff-port-892000" cluster
	I0429 05:09:24.413832   10072 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:09:24.413847   10072 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:09:24.413858   10072 cache.go:56] Caching tarball of preloaded images
	I0429 05:09:24.413920   10072 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:09:24.413925   10072 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:09:24.413983   10072 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/default-k8s-diff-port-892000/config.json ...
	I0429 05:09:24.414451   10072 start.go:360] acquireMachinesLock for default-k8s-diff-port-892000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:25.000467   10072 start.go:364] duration metric: took 585.960833ms to acquireMachinesLock for "default-k8s-diff-port-892000"
	I0429 05:09:25.000631   10072 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:25.000677   10072 fix.go:54] fixHost starting: 
	I0429 05:09:25.001364   10072 fix.go:112] recreateIfNeeded on default-k8s-diff-port-892000: state=Stopped err=<nil>
	W0429 05:09:25.001411   10072 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:09:25.009794   10072 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-892000" ...
	I0429 05:09:25.019993   10072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2c:24:7d:35:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I0429 05:09:25.030424   10072 main.go:141] libmachine: STDOUT: 
	I0429 05:09:25.030518   10072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:25.030651   10072 fix.go:56] duration metric: took 29.974208ms for fixHost
	I0429 05:09:25.030687   10072 start.go:83] releasing machines lock for "default-k8s-diff-port-892000", held for 30.156708ms
	W0429 05:09:25.030723   10072 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:25.030883   10072 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:25.030898   10072 start.go:728] Will try again in 5 seconds ...
	I0429 05:09:30.033115   10072 start.go:360] acquireMachinesLock for default-k8s-diff-port-892000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:30.033540   10072 start.go:364] duration metric: took 324.25µs to acquireMachinesLock for "default-k8s-diff-port-892000"
	I0429 05:09:30.033669   10072 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:30.033692   10072 fix.go:54] fixHost starting: 
	I0429 05:09:30.034519   10072 fix.go:112] recreateIfNeeded on default-k8s-diff-port-892000: state=Stopped err=<nil>
	W0429 05:09:30.034544   10072 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:09:30.043098   10072 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-892000" ...
	I0429 05:09:30.046254   10072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2c:24:7d:35:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/default-k8s-diff-port-892000/disk.qcow2
	I0429 05:09:30.055487   10072 main.go:141] libmachine: STDOUT: 
	I0429 05:09:30.055582   10072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:30.055676   10072 fix.go:56] duration metric: took 21.989ms for fixHost
	I0429 05:09:30.055695   10072 start.go:83] releasing machines lock for "default-k8s-diff-port-892000", held for 22.130042ms
	W0429 05:09:30.055873   10072 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-892000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:30.066022   10072 out.go:177] 
	W0429 05:09:30.070187   10072 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:30.070218   10072 out.go:239] * 
	W0429 05:09:30.072612   10072 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:09:30.081070   10072 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-892000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (53.588625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.84s)
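As the stderr above shows (start.go:728), the start path retries exactly once after five seconds before surfacing GUEST_PROVISION, which is why both the fresh-create and restart variants fail in roughly five to ten seconds. A minimal sketch of that retry shape (the stand-in function is illustrative, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails throughout this run.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}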

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-892000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (36.39125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-892000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.260125ms)

** stderr ** 
	error: context "default-k8s-diff-port-892000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-892000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (36.363584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)
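The assertion at start_stop_delete_test.go:297 is a plain substring check: the kubectl describe output must mention the custom addon image, and here that output is empty because kubectl never reached a cluster. A minimal sketch of the check (illustrative, not the test's exact code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		deploymentInfo := "" // kubectl describe output; empty after the failed describe
		const wantImage = " registry.k8s.io/echoserver:1.4"
		if !strings.Contains(deploymentInfo, wantImage) {
			fmt.Printf("addon did not load correct image. Expected to contain %q. Addon deployment info: %s\n",
				wantImage, deploymentInfo)
		}
	}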

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-892000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (32.447666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
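The "(-want +got)" header marks a go-cmp diff: every expected image carries a "-" prefix because `image list` returned nothing from the stopped VM. A minimal sketch of how such a diff is produced (the image names are from the log; the comparison itself is illustrative):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.0",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // `minikube image list` yielded no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.30.0 images missing (-want +got):\n%s", diff)
		}
	}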

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-892000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-892000 --alsologtostderr -v=1: exit status 83 (45.102667ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-892000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-892000"

-- /stdout --
** stderr ** 
	I0429 05:09:30.358145   10094 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:30.358317   10094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:30.358320   10094 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:30.358322   10094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:30.358462   10094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:30.358699   10094 out.go:298] Setting JSON to false
	I0429 05:09:30.358708   10094 mustload.go:65] Loading cluster: default-k8s-diff-port-892000
	I0429 05:09:30.358920   10094 config.go:182] Loaded profile config "default-k8s-diff-port-892000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:30.363063   10094 out.go:177] * The control-plane node default-k8s-diff-port-892000 host is not running: state=Stopped
	I0429 05:09:30.367071   10094 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-892000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-892000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (31.572333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (31.152791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
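Exit status 83 is the early-out path visible in the stderr above: mustload finds the control-plane host stopped, prints the two advice lines, and exits without attempting the pause. A rough sketch of that guard (illustrative only; minikube's real code maps the condition through its own reason codes):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		state := "Stopped" // what loading the profile reported for the host
		if state != "Running" {
			fmt.Printf("* The control-plane node default-k8s-diff-port-892000 host is not running: state=%s\n", state)
			fmt.Println(`  To start a cluster, run: "minikube start -p default-k8s-diff-port-892000"`)
			os.Exit(83) // the status the test observed
		}
		// ... the pause itself would proceed here ...
	}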

TestStartStop/group/newest-cni/serial/SecondStart (5.29s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-053000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-053000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.214535791s)

-- stdout --
	* [newest-cni-053000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-053000" primary control-plane node in "newest-cni-053000" cluster
	* Restarting existing qemu2 VM for "newest-cni-053000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-053000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0429 05:09:36.063148   10146 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:36.063266   10146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:36.063269   10146 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:36.063271   10146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:36.063398   10146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:36.064465   10146 out.go:298] Setting JSON to false
	I0429 05:09:36.080807   10146 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5947,"bootTime":1714386629,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 05:09:36.080873   10146 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:09:36.085859   10146 out.go:177] * [newest-cni-053000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 05:09:36.097752   10146 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 05:09:36.092917   10146 notify.go:220] Checking for updates...
	I0429 05:09:36.105806   10146 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 05:09:36.113770   10146 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 05:09:36.120743   10146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:09:36.128852   10146 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 05:09:36.138872   10146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:09:36.144142   10146 config.go:182] Loaded profile config "newest-cni-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:36.144444   10146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:09:36.148741   10146 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 05:09:36.156875   10146 start.go:297] selected driver: qemu2
	I0429 05:09:36.156882   10146 start.go:901] validating driver "qemu2" against &{Name:newest-cni-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:newest-cni-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:36.156942   10146 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:09:36.159683   10146 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0429 05:09:36.159739   10146 cni.go:84] Creating CNI manager for ""
	I0429 05:09:36.159748   10146 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:09:36.159770   10146 start.go:340] cluster config:
	{Name:newest-cni-053000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-053000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:09:36.164739   10146 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:36.171896   10146 out.go:177] * Starting "newest-cni-053000" primary control-plane node in "newest-cni-053000" cluster
	I0429 05:09:36.175886   10146 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:09:36.175905   10146 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 05:09:36.175916   10146 cache.go:56] Caching tarball of preloaded images
	I0429 05:09:36.176001   10146 preload.go:173] Found /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0429 05:09:36.176014   10146 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:09:36.176079   10146 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/newest-cni-053000/config.json ...
	I0429 05:09:36.176634   10146 start.go:360] acquireMachinesLock for newest-cni-053000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:36.176673   10146 start.go:364] duration metric: took 32.041µs to acquireMachinesLock for "newest-cni-053000"
	I0429 05:09:36.176684   10146 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:36.176691   10146 fix.go:54] fixHost starting: 
	I0429 05:09:36.176816   10146 fix.go:112] recreateIfNeeded on newest-cni-053000: state=Stopped err=<nil>
	W0429 05:09:36.176834   10146 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:09:36.180724   10146 out.go:177] * Restarting existing qemu2 VM for "newest-cni-053000" ...
	I0429 05:09:36.187889   10146 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:59:99:ac:af:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2
	I0429 05:09:36.190238   10146 main.go:141] libmachine: STDOUT: 
	I0429 05:09:36.190261   10146 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:36.190292   10146 fix.go:56] duration metric: took 13.599875ms for fixHost
	I0429 05:09:36.190298   10146 start.go:83] releasing machines lock for "newest-cni-053000", held for 13.619334ms
	W0429 05:09:36.190305   10146 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:36.190350   10146 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:36.190355   10146 start.go:728] Will try again in 5 seconds ...
	I0429 05:09:41.192480   10146 start.go:360] acquireMachinesLock for newest-cni-053000: {Name:mk3de1e714b5924061dc6b2f5fc68da0f10823ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 05:09:41.193034   10146 start.go:364] duration metric: took 426.875µs to acquireMachinesLock for "newest-cni-053000"
	I0429 05:09:41.193216   10146 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:41.193239   10146 fix.go:54] fixHost starting: 
	I0429 05:09:41.193949   10146 fix.go:112] recreateIfNeeded on newest-cni-053000: state=Stopped err=<nil>
	W0429 05:09:41.193980   10146 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 05:09:41.197327   10146 out.go:177] * Restarting existing qemu2 VM for "newest-cni-053000" ...
	I0429 05:09:41.202513   10146 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:59:99:ac:af:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18771-6092/.minikube/machines/newest-cni-053000/disk.qcow2
	I0429 05:09:41.211536   10146 main.go:141] libmachine: STDOUT: 
	I0429 05:09:41.211600   10146 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0429 05:09:41.211662   10146 fix.go:56] duration metric: took 18.429209ms for fixHost
	I0429 05:09:41.211682   10146 start.go:83] releasing machines lock for "newest-cni-053000", held for 18.576583ms
	W0429 05:09:41.211820   10146 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-053000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-053000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0429 05:09:41.219302   10146 out.go:177] 
	W0429 05:09:41.223414   10146 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0429 05:09:41.223431   10146 out.go:239] * 
	* 
	W0429 05:09:41.225774   10146 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:09:41.233221   10146 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-053000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000: exit status 7 (70.451709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.29s)
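Both restart attempts above die the same way: the qemu2 driver execs /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the unix socket at /var/run/socket_vmnet, and "Connection refused" means nothing is listening there, i.e. the socket_vmnet daemon is down on the agent. A minimal probe for that precondition (illustrative, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Same refusal the driver hit:
			// dial unix /var/run/socket_vmnet: connect: connection refused
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}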

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-053000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000: exit status 7 (32.069375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-053000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-053000 --alsologtostderr -v=1: exit status 83 (43.520834ms)

-- stdout --
	* The control-plane node newest-cni-053000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-053000"

-- /stdout --
** stderr ** 
	I0429 05:09:41.423227   10163 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:09:41.423397   10163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:41.423401   10163 out.go:304] Setting ErrFile to fd 2...
	I0429 05:09:41.423403   10163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:09:41.423533   10163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 05:09:41.423765   10163 out.go:298] Setting JSON to false
	I0429 05:09:41.423773   10163 mustload.go:65] Loading cluster: newest-cni-053000
	I0429 05:09:41.423983   10163 config.go:182] Loaded profile config "newest-cni-053000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:09:41.428475   10163 out.go:177] * The control-plane node newest-cni-053000 host is not running: state=Stopped
	I0429 05:09:41.431445   10163 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-053000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-053000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000: exit status 7 (32.389584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-053000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000: exit status 7 (32.6535ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.0/json-events 8.47
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.08
18 TestDownloadOnly/v1.30.0/DeleteAll 0.23
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.23
21 TestBinaryMirror 0.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.26
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 10.52
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.7
55 TestFunctional/serial/CacheCmd/cache/add_local 1.18
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.28
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.42
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
126 TestFunctional/parallel/ProfileCmd/profile_list 0.11
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.4
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.33
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 1.05
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.49
258 TestNoKubernetes/serial/Stop 3.39
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
275 TestStartStop/group/old-k8s-version/serial/Stop 2.11
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
288 TestStartStop/group/no-preload/serial/Stop 3.34
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
297 TestStartStop/group/embed-certs/serial/Stop 3.54
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 2
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.31
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-363000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-363000: exit status 85 (101.909792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT |          |
	|         | -p download-only-363000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 04:43:47
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 04:43:47.460945    6502 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:43:47.461160    6502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:43:47.461164    6502 out.go:304] Setting ErrFile to fd 2...
	I0429 04:43:47.461166    6502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:43:47.461296    6502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	W0429 04:43:47.461390    6502 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18771-6092/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18771-6092/.minikube/config/config.json: no such file or directory
	I0429 04:43:47.462850    6502 out.go:298] Setting JSON to true
	I0429 04:43:47.480996    6502 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4398,"bootTime":1714386629,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:43:47.481068    6502 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:43:47.485884    6502 out.go:97] [download-only-363000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:43:47.490140    6502 out.go:169] MINIKUBE_LOCATION=18771
	I0429 04:43:47.486015    6502 notify.go:220] Checking for updates...
	W0429 04:43:47.486058    6502 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 04:43:47.498854    6502 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:43:47.502642    6502 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:43:47.506161    6502 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:43:47.508993    6502 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	W0429 04:43:47.515057    6502 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 04:43:47.515263    6502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:43:47.516957    6502 out.go:97] Using the qemu2 driver based on user configuration
	I0429 04:43:47.516975    6502 start.go:297] selected driver: qemu2
	I0429 04:43:47.516989    6502 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:43:47.517078    6502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:43:47.520034    6502 out.go:169] Automatically selected the socket_vmnet network
	I0429 04:43:47.525274    6502 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0429 04:43:47.525419    6502 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 04:43:47.525473    6502 cni.go:84] Creating CNI manager for ""
	I0429 04:43:47.525491    6502 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 04:43:47.525543    6502 start.go:340] cluster config:
	{Name:download-only-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:43:47.530136    6502 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:43:47.533129    6502 out.go:97] Downloading VM boot image ...
	I0429 04:43:47.533159    6502 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/iso/arm64/minikube-v1.33.0-1713736271-18706-arm64.iso
	I0429 04:43:51.893550    6502 out.go:97] Starting "download-only-363000" primary control-plane node in "download-only-363000" cluster
	I0429 04:43:51.893576    6502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:43:51.948542    6502 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0429 04:43:51.948547    6502 cache.go:56] Caching tarball of preloaded images
	I0429 04:43:51.949680    6502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:43:51.957686    6502 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 04:43:51.957692    6502 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:43:52.030855    6502 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0429 04:43:57.170059    6502 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:43:57.170225    6502 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:43:57.865604    6502 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 04:43:57.865813    6502 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/download-only-363000/config.json ...
	I0429 04:43:57.865832    6502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/download-only-363000/config.json: {Name:mkc09461e31cba7ecb8f15df0ace1215d278d8e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:43:57.867448    6502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:43:57.867629    6502 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0429 04:43:58.194257    6502 out.go:169] 
	W0429 04:43:58.198279    6502 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00 0x1045c0e00] Decompressors:map[bz2:0x1400059d400 gz:0x1400059d408 tar:0x1400059d3b0 tar.bz2:0x1400059d3c0 tar.gz:0x1400059d3d0 tar.xz:0x1400059d3e0 tar.zst:0x1400059d3f0 tbz2:0x1400059d3c0 tgz:0x1400059d3d0 txz:0x1400059d3e0 tzst:0x1400059d3f0 xz:0x1400059d410 zip:0x1400059d420 zst:0x1400059d418] Getters:map[file:0x140020d2580 http:0x14000c922d0 https:0x14000c92320] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0429 04:43:58.198304    6502 out_reason.go:110] 
	W0429 04:43:58.205209    6502 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 04:43:58.209215    6502 out.go:169] 
	
	
	* The control-plane node download-only-363000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-363000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
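The one real error inside this otherwise passing log is the kubectl cache step: the downloader appends a go-getter style checksum query pointing at the binary's .sha256 sibling, and that checksum file 404s for v1.20.0 on darwin/arm64, which is why kubectl could not be cached. A sketch of the URL shape visible in the log (the helper below is illustrative, not minikube's actual download code):

	package main

	import "fmt"

	// kubectlURL mirrors the URL shape in the log: the download is
	// verified against a "file:" checksum source next to the binary.
	func kubectlURL(version, goos, arch string) string {
		base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/kubectl", version, goos, arch)
		return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
	}

	func main() {
		// For v1.20.0 the darwin/arm64 checksum fetch returns 404, as the
		// log records, so the whole download is rejected.
		fmt.Println(kubectlURL("v1.20.0", "darwin", "arm64"))
	}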

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-363000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.0/json-events (8.47s)
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-647000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-647000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 : (8.471454334s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (8.47s)

TestDownloadOnly/v1.30.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-647000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-647000: exit status 85 (80.068167ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT |                     |
	|         | -p download-only-363000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT | 29 Apr 24 04:43 PDT |
	| delete  | -p download-only-363000        | download-only-363000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT | 29 Apr 24 04:43 PDT |
	| start   | -o=json --download-only        | download-only-647000 | jenkins | v1.33.0 | 29 Apr 24 04:43 PDT |                     |
	|         | -p download-only-647000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 04:43:58
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 04:43:58.878627    6537 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:43:58.878768    6537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:43:58.878771    6537 out.go:304] Setting ErrFile to fd 2...
	I0429 04:43:58.878773    6537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:43:58.878903    6537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:43:58.880023    6537 out.go:298] Setting JSON to true
	I0429 04:43:58.895896    6537 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4409,"bootTime":1714386629,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:43:58.895987    6537 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:43:58.900577    6537 out.go:97] [download-only-647000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:43:58.904570    6537 out.go:169] MINIKUBE_LOCATION=18771
	I0429 04:43:58.900689    6537 notify.go:220] Checking for updates...
	I0429 04:43:58.910591    6537 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:43:58.913612    6537 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:43:58.915138    6537 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:43:58.918611    6537 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	W0429 04:43:58.924584    6537 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 04:43:58.924811    6537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:43:58.927564    6537 out.go:97] Using the qemu2 driver based on user configuration
	I0429 04:43:58.927572    6537 start.go:297] selected driver: qemu2
	I0429 04:43:58.927575    6537 start.go:901] validating driver "qemu2" against <nil>
	I0429 04:43:58.927610    6537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:43:58.930501    6537 out.go:169] Automatically selected the socket_vmnet network
	I0429 04:43:58.935704    6537 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0429 04:43:58.935800    6537 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 04:43:58.935828    6537 cni.go:84] Creating CNI manager for ""
	I0429 04:43:58.935838    6537 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:43:58.935843    6537 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 04:43:58.935903    6537 start.go:340] cluster config:
	{Name:download-only-647000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:43:58.939971    6537 iso.go:125] acquiring lock: {Name:mk92d45bd69ba852dd7020b51363a0737631064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:43:58.942568    6537 out.go:97] Starting "download-only-647000" primary control-plane node in "download-only-647000" cluster
	I0429 04:43:58.942579    6537 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:43:58.993630    6537 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:43:58.993638    6537 cache.go:56] Caching tarball of preloaded images
	I0429 04:43:58.993795    6537 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:43:59.000164    6537 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 04:43:59.000170    6537 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:43:59.077322    6537 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4?checksum=md5:677034533668c42fec962cc52f9b3c42 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0429 04:44:03.256189    6537 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:44:03.256341    6537 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0429 04:44:03.799309    6537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:44:03.799501    6537 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/download-only-647000/config.json ...
	I0429 04:44:03.799517    6537 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18771-6092/.minikube/profiles/download-only-647000/config.json: {Name:mk7e0b31ce27608eb7cbac0d8b5ae37fa69d1623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:44:03.799748    6537 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:44:03.799864    6537 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18771-6092/.minikube/cache/darwin/arm64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-647000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-647000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.08s)
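The log above shows the preload flow: the tarball URL carries an md5 value in its ?checksum= query, and the client saves and verifies that checksum before trusting the cache. A minimal Go sketch of the verify-after-download step (the local file name is hypothetical; this is an illustration of the pattern, not minikube's actual preload.go):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 compares a file's MD5 digest against an expected hex string,
// mirroring the "saving checksum ... verifying checksum" steps in the log.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Hypothetical local tarball; the md5 value is the one from the download URL above.
	if err := verifyMD5("preloaded-images.tar.lz4", "677034533668c42fec962cc52f9b3c42"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload checksum OK")
}
```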

TestDownloadOnly/v1.30.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-647000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.33s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-714000 --alsologtostderr --binary-mirror http://127.0.0.1:50957 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-714000
--- PASS: TestBinaryMirror (0.33s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-744000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-744000: exit status 85 (56.745167ms)

-- stdout --
	* Profile "addons-744000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-744000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-744000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-744000: exit status 85 (60.329333ms)

-- stdout --
	* Profile "addons-744000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-744000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.26s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.26s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status: exit status 7 (32.680083ms)

-- stdout --
	nospam-789000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status: exit status 7 (32.219292ms)

-- stdout --
	nospam-789000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status: exit status 7 (32.324375ms)

-- stdout --
	nospam-789000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause: exit status 83 (41.989125ms)

-- stdout --
	* The control-plane node nospam-789000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-789000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause: exit status 83 (41.775416ms)

-- stdout --
	* The control-plane node nospam-789000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-789000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause: exit status 83 (41.846542ms)

-- stdout --
	* The control-plane node nospam-789000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-789000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause: exit status 83 (41.729709ms)

-- stdout --
	* The control-plane node nospam-789000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-789000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause: exit status 83 (41.99525ms)

-- stdout --
	* The control-plane node nospam-789000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-789000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause: exit status 83 (41.281375ms)

-- stdout --
	* The control-plane node nospam-789000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-789000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (10.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 stop: (3.670595583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 stop: (3.282513166s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-789000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-789000 stop: (3.560751458s)
--- PASS: TestErrorSpam/stop (10.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18771-6092/.minikube/files/etc/test/nested/copy/6500/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.70s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1320360667/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cache add minikube-local-cache-test:functional-431000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 cache delete minikube-local-cache-test:functional-431000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-431000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 config get cpus: exit status 14 (33.01875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 config get cpus: exit status 14 (38.190667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
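The ConfigCmd run above shows the contract being tested: after `config unset cpus`, `config get cpus` fails with exit status 14 and "specified key could not be found in config". A toy Go sketch of that key-not-found contract, assuming an in-memory map (minikube's real config is file-backed; names here are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// errKeyNotFound mirrors the error text that makes `config get`
// exit non-zero once a key has been unset.
var errKeyNotFound = errors.New("specified key could not be found in config")

type config map[string]string

func (c config) set(k, v string) { c[k] = v }
func (c config) unset(k string)  { delete(c, k) }

func (c config) get(k string) (string, error) {
	v, ok := c[k]
	if !ok {
		return "", errKeyNotFound
	}
	return v, nil
}

func main() {
	cfg := config{}
	cfg.set("cpus", "2")
	cfg.unset("cpus")
	if _, err := cfg.get("cpus"); err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		os.Exit(14) // exit code observed in the test run above
	}
}
```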

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-431000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-431000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (165.54525ms)

-- stdout --
	* [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0429 04:45:49.626402    7143 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:45:49.626584    7143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:49.626593    7143 out.go:304] Setting ErrFile to fd 2...
	I0429 04:45:49.626597    7143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:49.626764    7143 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:45:49.628127    7143 out.go:298] Setting JSON to false
	I0429 04:45:49.647288    7143 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4520,"bootTime":1714386629,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:45:49.647353    7143 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:45:49.653346    7143 out.go:177] * [functional-431000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0429 04:45:49.660262    7143 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:45:49.660294    7143 notify.go:220] Checking for updates...
	I0429 04:45:49.667247    7143 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:45:49.670221    7143 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:45:49.673199    7143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:45:49.676305    7143 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:45:49.679294    7143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:45:49.682618    7143 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:45:49.682921    7143 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:45:49.687221    7143 out.go:177] * Using the qemu2 driver based on existing profile
	I0429 04:45:49.694284    7143 start.go:297] selected driver: qemu2
	I0429 04:45:49.694293    7143 start.go:901] validating driver "qemu2" against &{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:45:49.694355    7143 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:45:49.701221    7143 out.go:177] 
	W0429 04:45:49.705245    7143 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 04:45:49.709275    7143 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-431000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
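The dry run exits with status 23 because the requested 250MB is below the 1800MB usable minimum reported in the log. A rough Go sketch of this kind of pre-flight memory check (the function name and wiring are illustrative, not minikube's actual validation code; the 1800MB floor and exit code come from the output above):

```go
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // floor reported in the RSRC_INSUFFICIENT_REQ_MEMORY message

// validateRequestedMemory rejects allocations below the usable minimum
// before any VM is created, which is why --dry-run fails in well under a second.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // exit status observed for this failure in the run above
	}
}
```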

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-431000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-431000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (106.900208ms)

-- stdout --
	* [functional-431000] minikube v1.33.0 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0429 04:45:49.857495    7154 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:45:49.857597    7154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:49.857599    7154 out.go:304] Setting ErrFile to fd 2...
	I0429 04:45:49.857602    7154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:45:49.857731    7154 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18771-6092/.minikube/bin
	I0429 04:45:49.859141    7154 out.go:298] Setting JSON to false
	I0429 04:45:49.875828    7154 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4520,"bootTime":1714386629,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0429 04:45:49.875924    7154 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:45:49.879272    7154 out.go:177] * [functional-431000] minikube v1.33.0 sur Darwin 14.4.1 (arm64)
	I0429 04:45:49.885184    7154 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 04:45:49.885253    7154 notify.go:220] Checking for updates...
	I0429 04:45:49.892288    7154 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	I0429 04:45:49.895284    7154 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0429 04:45:49.898242    7154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:45:49.901294    7154 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	I0429 04:45:49.902592    7154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:45:49.905468    7154 config.go:182] Loaded profile config "functional-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:45:49.905725    7154 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:45:49.910261    7154 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0429 04:45:49.915270    7154 start.go:297] selected driver: qemu2
	I0429 04:45:49.915280    7154 start.go:901] validating driver "qemu2" against &{Name:functional-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:45:49.915343    7154 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:45:49.921279    7154 out.go:177] 
	W0429 04:45:49.925262    7154 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 04:45:49.929262    7154 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.381925458s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-431000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image rm gcr.io/google-containers/addon-resizer:functional-431000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-431000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 image save --daemon gcr.io/google-containers/addon-resizer:functional-431000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-431000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "71.738875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.356209ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "72.317959ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.519917ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012300083s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
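The tunnel DNS tests confirm that the in-cluster name nginx-svc.default.svc.cluster.local. resolves on the host while the tunnel is up. A small Go check in the same spirit, using the standard resolver rather than the dscacheutil the test shells out to (assumes a running cluster and tunnel):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same FQDN the tunnel test resolves; this only succeeds with an active tunnel.
	host := "nginx-svc.default.svc.cluster.local."
	addrs, err := net.DefaultResolver.LookupHost(ctx, host)
	if err != nil {
		fmt.Fprintf(os.Stderr, "DNS resolution for %s failed: %v\n", host, err)
		os.Exit(1)
	}
	fmt.Printf("%s resolved to %v\n", host, addrs)
}
```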

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-431000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-431000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-431000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-431000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.4s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-838000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-838000 --output=json --user=testUser: (3.402309083s)
--- PASS: TestJSONOutput/stop/Command (3.40s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-658000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-658000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.76275ms)

-- stdout --
	{"specversion":"1.0","id":"588270a2-0228-4f55-be82-47212a4538d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-658000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e23dac57-4899-4458-8597-278d7bf09e12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18771"}}
	{"specversion":"1.0","id":"12e1c4b7-15e6-4f3e-a98c-0c70a2c5f70a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig"}}
	{"specversion":"1.0","id":"3a9d0cec-188f-45fc-ac64-fb2fd29ab8b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e0ceb4f7-9c77-46bf-a91f-1bbac7997402","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aa7d3a8c-5efa-4a90-ae05-2b651e1e3026","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube"}}
	{"specversion":"1.0","id":"05b5733e-a6f8-4864-89a1-544252edb989","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7b2abec7-99d5-4dba-b6e6-26f8539d247a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-658000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-658000
--- PASS: TestErrorJSONOutput (0.33s)
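
Each line in the stdout block above is a self-contained CloudEvents-style JSON object. A minimal Go sketch for consuming such a stream, with the struct fields taken from the events shown above (the event type name and the printed summary are ours):

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // event mirrors the envelope visible in minikube's --output=json lines.
    type event struct {
    	SpecVersion string            `json:"specversion"`
    	ID          string            `json:"id"`
    	Source      string            `json:"source"`
    	Type        string            `json:"type"`
    	Data        map[string]string `json:"data"`
    }

    func main() {
    	sc := bufio.NewScanner(os.Stdin) // e.g. piped from: minikube start --output=json
    	for sc.Scan() {
    		var e event
    		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
    			continue // tolerate any non-JSON lines
    		}
    		// io.k8s.sigs.minikube.error events also carry exitcode and name,
    		// as in the DRV_UNSUPPORTED_OS event above.
    		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
    	}
    }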

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (106.758041ms)

-- stdout --
	* [NoKubernetes-358000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18771-6092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18771-6092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
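
The MK_USAGE failure above is the expected result: --no-kubernetes and --kubernetes-version are mutually exclusive, so the test passes precisely because start exits non-zero. Per the hint in the stderr block, a conflict-free rerun would first clear the global setting (both commands taken from the log above):

    $ minikube config unset kubernetes-version
    $ out/minikube-darwin-arm64 start -p NoKubernetes-358000 --no-kubernetes --driver=qemu2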

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-358000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-358000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.712583ms)

-- stdout --
	* The control-plane node NoKubernetes-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-358000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.49s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.73055325s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.757335667s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.49s)

TestNoKubernetes/serial/Stop (3.39s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-358000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-358000: (3.385489666s)
--- PASS: TestNoKubernetes/serial/Stop (3.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-358000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-358000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.598416ms)

-- stdout --
	* The control-plane node NoKubernetes-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-358000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-383000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

TestStartStop/group/old-k8s-version/serial/Stop (2.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-489000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-489000 --alsologtostderr -v=3: (2.109228458s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (35.249375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-489000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/no-preload/serial/Stop (3.34s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-385000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-385000 --alsologtostderr -v=3: (3.340901166s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.34s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-385000 -n no-preload-385000: exit status 7 (58.693792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-385000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-935000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-935000 --alsologtostderr -v=3: (3.535297666s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.54s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-935000 -n embed-certs-935000: exit status 7 (64.735792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-935000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-892000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-892000 --alsologtostderr -v=3: (1.997475042s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (64.449459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-892000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-053000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-053000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-053000 --alsologtostderr -v=3: (3.309975667s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-053000 -n newest-cni-053000: exit status 7 (68.43225ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-053000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.12s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2107070662/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714391114228413000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2107070662/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714391114228413000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2107070662/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714391114228413000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2107070662/001/test-1714391114228413000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.356ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.35425ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.899208ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.101458ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.200833ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.946875ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.831917ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo umount -f /mount-9p": exit status 83 (46.812625ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-431000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2107070662/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.12s)
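
The repeated findmnt failures above show the shape of this test: it polls `findmnt -T /mount-9p | grep 9p` over ssh until a 9p mount appears, and skips when it never does (here because the host was never running, and because macOS gates unsigned binaries listening on non-localhost ports). A minimal Go sketch of that polling pattern; the function name and attempt count are our own reconstruction, not the helper in functional_test_mount_test.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForMount retries the same probe logged above until a 9p mount
    // shows up at dir inside the guest, or attempts run out.
    func waitForMount(profile, dir string, attempts int) bool {
    	for i := 0; i < attempts; i++ {
    		probe := fmt.Sprintf("findmnt -T %s | grep 9p", dir)
    		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile, "ssh", probe)
    		if cmd.Run() == nil {
    			return true // probe exited 0: the 9p mount is visible
    		}
    		time.Sleep(time.Second)
    	}
    	return false // caller skips the test, matching the SKIP above
    }

    func main() {
    	if !waitForMount("functional-431000", "/mount-9p", 7) {
    		fmt.Println("skipping: mount did not appear")
    	}
    }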

TestFunctional/parallel/MountCmd/specific-port (11.26s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port727278284/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (64.736ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.479042ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.147916ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.685375ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.0995ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.763333ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.455375ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "sudo umount -f /mount-9p": exit status 83 (46.795416ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-431000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port727278284/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.26s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3798323237/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3798323237/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3798323237/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1: exit status 83 (87.372458ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1: exit status 83 (87.048084ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1: exit status 83 (88.3745ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1: exit status 83 (89.368375ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1: exit status 83 (89.0115ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1: exit status 83 (86.799625ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-431000 ssh "findmnt -T" /mount1: exit status 83 (87.799458ms)

-- stdout --
	* The control-plane node functional-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-431000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3798323237/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3798323237/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-431000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3798323237/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.95s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-413000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-413000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-413000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

>>> host: /etc/hosts:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

>>> host: /etc/resolv.conf:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-413000

>>> host: crictl pods:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

>>> host: crictl containers:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

>>> k8s: describe netcat deployment:
error: context "cilium-413000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-413000" does not exist

>>> k8s: netcat logs:
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-413000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-413000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-413000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-413000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-413000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-413000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-413000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413000"

                                                
                                                
----------------------- debugLogs end: cilium-413000 [took: 2.26088675s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-413000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-413000
--- SKIP: TestNetworkPlugins/group/cilium (2.49s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-105000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-105000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
